
Commit 762e7be

apply review comment
1 parent 69a2cf6 commit 762e7be

File tree: 1 file changed (+24, -8 lines)


docs/reference/elasticsearch/mapping-reference/dense-vector.md

Lines changed: 24 additions & 8 deletions
@@ -103,6 +103,10 @@ PUT my-index-2
 {{es}} uses the [HNSW algorithm](https://arxiv.org/abs/1603.09320) to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved speed.
 
 ## Accessing `dense_vector` fields in search responses
+```{applies_to}
+stack: ga 9.2
+serverless: ga
+```
 
 By default, `dense_vector` fields are **not included in `_source`** in responses from the `_search`, `_msearch`, `_get`, and `_mget` APIs.
 This helps reduce response size and improve performance, especially in scenarios where vectors are used solely for similarity scoring and not required in the output.
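Even though vectors are excluded from `_source` by default, they can still be requested explicitly via the `fields` option of the `_search` API. A minimal sketch (the field name `my_vector` is illustrative, not part of the diff above):

```console
POST my-index-2/_search
{
  "query": {
    "match_all": {}
  },
  "fields": ["my_vector"]
}
```

Requesting the field this way returns the vector values in the `fields` section of each hit without re-enabling vector storage in `_source`.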
@@ -130,6 +134,10 @@ POST my-index-2/_search
 ```
 
 ### Storage behavior and `_source`
+```{applies_to}
+stack: ga 9.2
+serverless: ga
+```
 
 By default, `dense_vector` fields are **not stored in `_source`** on disk. This is also controlled by the index setting `index.mapping.exclude_source_vectors`.
 This setting is enabled by default for newly created indices and can only be set at index creation time.
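Because `index.mapping.exclude_source_vectors` can only be set at index creation time, it must be supplied in the create-index request. A minimal sketch (the index name `my-index-3`, field name, and dimension count are illustrative):

```console
PUT my-index-3
{
  "settings": {
    "index.mapping.exclude_source_vectors": true
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 3
      }
    }
  }
}
```

Here the setting is spelled out explicitly even though `true` is already the default for newly created indices.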
@@ -142,10 +150,18 @@ When enabled:
 This setting is compatible with synthetic `_source`, where the entire `_source` document is reconstructed from columnar storage. In full synthetic mode, no `_source` is stored on disk, and all fields — including vectors — are rebuilt when needed.
 
 ### Rehydration and precision
+```{applies_to}
+stack: ga 9.2
+serverless: ga
+```
 
 When vector values are rehydrated (e.g., for reindex, recovery, or explicit `_source` requests), they are restored from their internal format. Internally, vectors are stored at float precision, so if they were originally indexed as higher-precision types (e.g., `double` or `long`), the rehydrated values will have reduced precision. This lossy representation is intended to save space while preserving search quality.
 
 ### Storing original vectors in `_source`
+```{applies_to}
+stack: ga 9.2
+serverless: ga
+```
 
 If you want to preserve the original vector values exactly as they were provided, you can re-enable vector storage in `_source`:
 
@@ -337,16 +353,16 @@ $$$dense-vector-index-options$$$
 `type`
 : (Required, string) The type of kNN algorithm to use. Can be any of:
 * `hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for scalable approximate kNN search. This supports all `element_type` values.
-* `int8_hnsw` - The default index type for some float vectors:
-
-  * {applies_to}`stack: ga 9.1` Default for float vectors with less than 384 dimensions.
+* `int8_hnsw` - The default index type for some float vectors:
+
+  * {applies_to}`stack: ga 9.1` Default for float vectors with less than 384 dimensions.
   * {applies_to}`stack: ga 9.0` Default for all float vectors.
-
+
   This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) with automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 4x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
 * `int4_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) with automatic scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 8x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
 * `bbq_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) with automatic binary quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 32x at the cost of accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
-
-  {applies_to}`stack: ga 9.1` `bbq_hnsw` is the default index type for float vectors with greater than or equal to 384 dimensions.
+
+  {applies_to}`stack: ga 9.1` `bbq_hnsw` is the default index type for float vectors with greater than or equal to 384 dimensions.
 * `flat` - This utilizes a brute-force search algorithm for exact kNN search. This supports all `element_type` values.
 * `int8_flat` - This utilizes a brute-force search algorithm with automatic scalar quantization. Only supports `element_type` of `float`.
 * `int4_flat` - This utilizes a brute-force search algorithm with automatic half-byte scalar quantization. Only supports `element_type` of `float`.
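As an illustration of the `type` option described above, a mapping that opts into `bbq_hnsw` explicitly might look like the following sketch (the index name, field name, and dimension count are illustrative):

```console
PUT my-index-3
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 768,
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
```

With 768 dimensions this matches the {applies_to}`stack: ga 9.1` default, so spelling out `index_options` mainly serves to pin the behavior regardless of version defaults.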
@@ -366,8 +382,8 @@ $$$dense-vector-index-options$$$
 : (Optional, object) Configures automatic vector rescoring on knn queries for the given field. Only applicable to quantized index types.
 :::::{dropdown} Properties of rescore_vector
 `oversample`
-: (required, float) The amount to oversample the search results by. This value should be one of the following:
-  * Greater than `1.0` and less than `10.0`
+: (required, float) The amount to oversample the search results by. This value should be one of the following:
+  * Greater than `1.0` and less than `10.0`
   * Exactly `0` to indicate that no oversampling and rescoring should occur {applies_to}`stack: ga 9.1`
 : The higher the value, the more vectors will be gathered and rescored with the raw values per shard.
 : If a knn query specifies its own `rescore_vector` parameter, the query's `rescore_vector` parameter is used instead.
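For illustration, a knn search that asks for 2x oversampling during rescoring might look like the following sketch (the index name, field name, and vector values are illustrative):

```console
POST my-index-3/_search
{
  "knn": {
    "field": "my_vector",
    "query_vector": [0.12, 0.45, 0.91],
    "k": 10,
    "num_candidates": 100,
    "rescore_vector": {
      "oversample": 2.0
    }
  }
}
```

With `oversample: 2.0`, each shard gathers roughly twice as many quantized candidates as requested and rescores them against the raw vector values before returning the top `k`.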
