Commit b3a06fb

[DOCS] Fix indentation of dense vector lists (#133609)
1 parent 5f92b97 commit b3a06fb

File tree: 1 file changed (+17, −20 lines)

docs/reference/elasticsearch/mapping-reference/dense-vector.md

@@ -115,23 +115,23 @@ To retrieve vector values explicitly, you can use:
 
 * The `fields` option to request specific vector fields directly:
 
-```console
-POST my-index-2/_search
-{
-"fields": ["my_vector"]
-}
-```
+```console
+POST my-index-2/_search
+{
+"fields": ["my_vector"]
+}
+```
 
 - The `_source.exclude_vectors` flag to re-enable vector inclusion in `_source` responses:
 
-```console
-POST my-index-2/_search
-{
-"_source": {
-"exclude_vectors": false
+```console
+POST my-index-2/_search
+{
+"_source": {
+"exclude_vectors": false
+}
 }
-}
-```
+```
 
 ### Storage behavior and `_source`

@@ -309,7 +309,7 @@ $$$dense-vector-similarity$$$
 `l2_norm`
 : Computes similarity based on the L2 distance (also known as Euclidean distance) between the vectors. The document `_score` is computed as `1 / (1 + l2_norm(query, vector)^2)`.
 
-For `bit` vectors, instead of using `l2_norm`, the `hamming` distance between the vectors is used. The `_score` transformation is `(numBits - hamming(a, b)) / numBits`
+For `bit` vectors, instead of using `l2_norm`, the `hamming` distance between the vectors is used. The `_score` transformation is `(numBits - hamming(a, b)) / numBits`
 
 `dot_product`
 : Computes the dot product of two unit vectors. This option provides an optimized way to perform cosine similarity. The constraints and computed score are defined by `element_type`.
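The two `_score` formulas quoted in this hunk can be sketched in plain Python (illustrative only; Elasticsearch computes these internally, and the function names here are hypothetical):

```python
import math

def l2_norm_score(query, vector):
    """_score for `l2_norm` similarity: 1 / (1 + l2_norm(query, vector)^2)."""
    l2 = math.sqrt(sum((q - v) ** 2 for q, v in zip(query, vector)))
    return 1 / (1 + l2 ** 2)

def hamming_score(a, b, num_bits):
    """_score for `bit` vectors: (numBits - hamming(a, b)) / numBits.
    `a` and `b` are byte sequences encoding the bit vectors."""
    hamming = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return (num_bits - hamming) / num_bits

print(l2_norm_score([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
print(hamming_score(b"\xff", b"\x00", 8))     # all 8 bits differ -> 0.0
```

Note how both transformations map "closer" to a higher score in `(0, 1]`.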
@@ -341,15 +341,13 @@ $$$dense-vector-index-options$$$
 `type`
 : (Required, string) The type of kNN algorithm to use. Can be either any of:
 * `hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for scalable approximate kNN search. This supports all `element_type` values.
-* `int8_hnsw` - The default index type for some float vectors:
-
+* `int8_hnsw` - The default index type for some float vectors:
 * {applies_to}`stack: ga 9.1` Default for float vectors with less than 384 dimensions.
 * {applies_to}`stack: ga 9.0` Default for float all vectors.
-
 This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatically scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 4x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
 * `int4_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatically scalar quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 8x at the cost of some accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
 * `bbq_hnsw` - This utilizes the [HNSW algorithm](https://arxiv.org/abs/1603.09320) in addition to automatically binary quantization for scalable approximate kNN search with `element_type` of `float`. This can reduce the memory footprint by 32x at the cost of accuracy. See [Automatically quantize vectors for kNN search](#dense-vector-quantization).
-
+
 {applies_to}`stack: ga 9.1` `bbq_hnsw` is the default index type for float vectors with greater than or equal to 384 dimensions.
 * `flat` - This utilizes a brute-force search algorithm for exact kNN search. This supports all `element_type` values.
 * `int8_flat` - This utilizes a brute-force search algorithm in addition to automatically scalar quantization. Only supports `element_type` of `float`.
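As an illustration of how one of these index types is selected in a mapping (hypothetical index and field names; a sketch, not taken from the diffed page):

```console
PUT my-bbq-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
```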
@@ -365,7 +363,6 @@ $$$dense-vector-index-options$$$
 `confidence_interval`
 : (Optional, float) Only applicable to `int8_hnsw`, `int4_hnsw`, `int8_flat`, and `int4_flat` index types. The confidence interval to use when quantizing the vectors. Can be any value between and including `0.90` and `1.0` or exactly `0`. When the value is `0`, this indicates that dynamic quantiles should be calculated for optimized quantization. When between `0.90` and `1.0`, this value restricts the values used when calculating the quantization thresholds. For example, a value of `0.95` will only use the middle 95% of the values when calculating the quantization thresholds (e.g. the highest and lowest 2.5% of values will be ignored). Defaults to `1/(dims + 1)` for `int8` quantized vectors and `0` for `int4` for dynamic quantile calculation.
 
-
 `rescore_vector` {applies_to}`stack: preview 9.0, ga 9.1`
 : (Optional, object) An optional section that configures automatic vector rescoring on knn queries for the given field. Only applicable to quantized index types.
 :::::{dropdown} Properties of rescore_vector
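The "middle 95%" idea behind `confidence_interval` can be sketched as follows (an assumption-laden illustration of the concept, not Elasticsearch's actual quantization code):

```python
def quantization_thresholds(values, confidence_interval=0.95):
    """Illustrative sketch: restrict threshold calculation to the middle
    `confidence_interval` fraction of the sorted values (e.g. for 0.95,
    ignore the highest and lowest 2.5%), then take the min/max of the rest."""
    s = sorted(values)
    tail = int(len(s) * (1.0 - confidence_interval) / 2.0)
    kept = s[tail:len(s) - tail] if tail else s
    return kept[0], kept[-1]

# Two extreme outliers no longer dominate the quantization range:
values = [x / 500.0 for x in range(-500, 501)] + [-100.0, 100.0]
lo, hi = quantization_thresholds(values, 0.95)
```

Trimming the tails keeps a few outliers from stretching the quantization range and wasting most of the available `int8`/`int4` buckets.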
@@ -386,7 +383,7 @@ $$$dense-vector-index-options$$$
 `dense_vector` fields support [synthetic `_source`](/reference/elasticsearch/mapping-reference/mapping-source-field.md#synthetic-source) .
 
 
-## Indexing & Searching bit vectors [dense-vector-index-bit]
+## Indexing and searching bit vectors [dense-vector-index-bit]
 
 When using `element_type: bit`, this will treat all vectors as bit vectors. Bit vectors utilize only a single bit per dimension and are internally encoded as bytes. This can be useful for very high-dimensional vectors or models.

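The "one bit per dimension, encoded as bytes" idea from the last context line above can be sketched like this (an illustration of the packing concept, not Elasticsearch's internal encoding):

```python
def pack_bits(bits):
    """Pack a bit vector (one 0/1 entry per dimension) into bytes:
    8 dimensions per byte, most significant bit first; the final
    partial byte, if any, is zero-padded on the right."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        byte = 0
        for b in chunk:
            byte = (byte << 1) | (1 if b else 0)
        byte <<= 8 - len(chunk)  # right-pad a short final chunk
        out.append(byte)
    return bytes(out)

packed = pack_bits([1, 0, 1, 0, 1, 0, 1, 0])  # -> b"\xaa"
```

Packing this way is what makes very high-dimensional bit vectors cheap: a 4096-dimension bit vector occupies only 512 bytes.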