
Commit aea4853

[Docs] kNN vector rescoring for quantized vectors (#118425)
1 parent 408f473 commit aea4853

6 files changed: +187 −85 lines changed

docs/reference/mapping/types/dense-vector.asciidoc

Lines changed: 3 additions & 1 deletion
@@ -121,11 +121,13 @@ The three following quantization strategies are supported:
 * `bbq` - experimental:[] Better binary quantization which reduces each dimension to a single bit precision. This reduces the memory footprint by 96% (or 32x) at a larger cost of accuracy. Generally, oversampling during query time and reranking can help mitigate the accuracy loss.
 
 
-When using a quantized format, you may want to oversample and rescore the results to improve accuracy. See <<dense-vector-knn-search-reranking, oversampling and rescoring>> for more information.
+When using a quantized format, you may want to oversample and rescore the results to improve accuracy. See <<dense-vector-knn-search-rescoring, oversampling and rescoring>> for more information.
 
 To use a quantized index, you can set your index type to `int8_hnsw`, `int4_hnsw`, or `bbq_hnsw`. When indexing `float` vectors, the current default
 index type is `int8_hnsw`.
 
+Quantized vectors can use <<dense-vector-knn-search-rescoring,oversampling and rescoring>> to improve accuracy on approximate kNN search results.
+
 NOTE: Quantization will continue to keep the raw float vector values on disk for reranking, reindexing, and quantization improvements over the lifetime of the data.
 This means disk usage will increase by ~25% for `int8`, ~12.5% for `int4`, and ~3.1% for `bbq` due to the overhead of storing the quantized and raw vectors.

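The `int8_hnsw`, `int4_hnsw`, and `bbq_hnsw` index types mentioned in this hunk are chosen through the mapping's `index_options`. As a minimal sketch, not part of this commit, with an illustrative index name, field name, and dimension count:

[source,console]
----
PUT my-quantized-index
{
  "mappings": {
    "properties": {
      "my-vector": {
        "type": "dense_vector",
        "dims": 384,
        "index": true,
        "index_options": {
          "type": "int8_hnsw"
        }
      }
    }
  }
}
----

Swapping `int8_hnsw` for `int4_hnsw` or `bbq_hnsw` trades additional memory savings for additional accuracy loss, which is what the oversampling and rescoring documented in this commit are meant to recover.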
docs/reference/query-dsl/knn-query.asciidoc

Lines changed: 3 additions & 0 deletions
@@ -137,6 +137,9 @@ documents are then scored according to <<dense-vector-similarity, `similarity`>>
 and the provided `boost` is applied.
 --
 
+include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-rescore-vector]
+
+
 `boost`::
 +
 --

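With the `knn-rescore-vector` tag now included among the `knn` query parameters, a query-DSL request can pass `rescore_vector` alongside the other `knn` options. A minimal sketch, assuming an existing quantized `dense_vector` field; the index name, field name, and vector values are illustrative:

[source,console]
----
POST my-index/_search
{
  "size": 10,
  "query": {
    "knn": {
      "field": "my-vector",
      "query_vector": [1.5, -0.5, 3.0],
      "num_candidates": 100,
      "rescore_vector": {
        "oversample": 1.5
      }
    }
  }
}
----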
docs/reference/rest-api/common-parms.asciidoc

Lines changed: 24 additions & 0 deletions
@@ -1356,3 +1356,27 @@ tag::rrf-filter[]
 Applies the specified <<query-dsl-bool-query, boolean query filter>> to all of the specified sub-retrievers,
 according to each retriever's specifications.
 end::rrf-filter[]
+
+tag::knn-rescore-vector[]
+
+`rescore_vector`::
++
+--
+(Optional, object) Functionality in preview:[]. Apply oversampling and rescoring to quantized vectors.
+
+NOTE: Rescoring only makes sense for quantized vectors; when <<dense-vector-quantization,quantization>> is not used, the original vectors are used for scoring.
+Rescore option will be ignored for non-quantized `dense_vector` fields.
+
+`oversample`::
+(Required, float)
++
+Applies the specified oversample factor to `k` on the approximate kNN search.
+The approximate kNN search will:
+
+* Retrieve `num_candidates` candidates per shard.
+* From these candidates, the top `k * oversample` candidates per shard will be rescored using the original vectors.
+* The top `k` rescored candidates will be returned.
+
+See <<dense-vector-knn-search-rescoring,oversampling and rescoring quantized vectors>> for details.
+--
+end::knn-rescore-vector[]

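To make the `k * oversample` arithmetic in the tag above concrete: with `k: 10`, `num_candidates: 200`, and `oversample: 3.0`, each shard retrieves 200 candidates, rescores the top `10 * 3.0 = 30` of them with the original vectors, and contributes its best 10 to the global result. A hypothetical top-level kNN request with those values (index and field names are illustrative, not part of this commit):

[source,console]
----
POST my-index/_search
{
  "knn": {
    "field": "my-vector",
    "query_vector": [1.5, -0.5, 3.0],
    "k": 10,
    "num_candidates": 200,
    "rescore_vector": {
      "oversample": 3.0
    }
  }
}
----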
docs/reference/search/retriever.asciidoc

Lines changed: 5 additions & 3 deletions
@@ -233,6 +233,8 @@ include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-filter]
 +
 include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-similarity]
 
+include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-rescore-vector]
+
 ===== Restrictions
 
 The parameters `query_vector` and `query_vector_builder` cannot be used together.
@@ -576,15 +578,15 @@ This example demonstrates how to deploy the {ml-docs}/ml-nlp-rerank.html[Elastic
 
 Follow these steps:
 
-. Create an inference endpoint for the `rerank` task using the <<put-inference-api, Create {infer} API>>.
+. Create an inference endpoint for the `rerank` task using the <<put-inference-api, Create {infer} API>>.
 +
 [source,console]
 ----
 PUT _inference/rerank/my-elastic-rerank
 {
   "service": "elasticsearch",
   "service_settings": {
-    "model_id": ".rerank-v1",
+    "model_id": ".rerank-v1",
     "num_threads": 1,
     "adaptive_allocations": { <1>
       "enabled": true,
@@ -595,7 +597,7 @@ PUT _inference/rerank/my-elastic-rerank
   }
 }
 ----
 // TEST[skip:uses ML]
-<1> {ml-docs}/ml-nlp-auto-scale.html#nlp-model-adaptive-allocations[Adaptive allocations] will be enabled with the minimum of 1 and the maximum of 10 allocations.
+<1> {ml-docs}/ml-nlp-auto-scale.html#nlp-model-adaptive-allocations[Adaptive allocations] will be enabled with the minimum of 1 and the maximum of 10 allocations.
 +
 . Define a `text_similarity_rerank` retriever:
 +

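On the retriever surface, the newly included tag places `rescore_vector` next to the other `knn` retriever parameters. A minimal sketch, again with illustrative index, field, and vector values:

[source,console]
----
GET my-index/_search
{
  "retriever": {
    "knn": {
      "field": "my-vector",
      "query_vector": [1.5, -0.5, 3.0],
      "k": 10,
      "num_candidates": 100,
      "rescore_vector": {
        "oversample": 2.0
      }
    }
  }
}
----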
docs/reference/search/search-your-data/knn-search.asciidoc

Lines changed: 150 additions & 81 deletions
@@ -781,7 +781,7 @@ What if you wanted to filter by some top-level document metadata? You can do thi
 
 
 NOTE: `filter` will always be over the top-level document metadata. This means you cannot filter based on `nested`
-field metadata.
+field metadata.
 
 [source,console]
 ----
@@ -1068,100 +1068,77 @@ NOTE: Approximate kNN search always uses the
 the global top `k` matches across shards. You cannot set the
 `search_type` explicitly when running kNN search.
 
+
 [discrete]
-[[exact-knn]]
-=== Exact kNN
+[[dense-vector-knn-search-rescoring]]
+==== Oversampling and rescoring for quantized vectors
 
-To run an exact kNN search, use a `script_score` query with a vector function.
+When using <<dense-vector-quantization,quantized vectors>> for kNN search, you can optionally rescore results to balance performance and accuracy, by doing:
 
-. Explicitly map one or more `dense_vector` fields. If you don't intend to use
-the field for approximate kNN, set the `index` mapping option to `false`. This
-can significantly improve indexing speed.
-+
-[source,console]
-----
-PUT product-index
-{
-  "mappings": {
-    "properties": {
-      "product-vector": {
-        "type": "dense_vector",
-        "dims": 5,
-        "index": false
-      },
-      "price": {
-        "type": "long"
-      }
-    }
-  }
-}
-----
+* *Oversampling*: Retrieve more candidates per shard.
+* *Rescoring*: Use the original vector values for re-calculating the score on the oversampled candidates.
 
-. Index your data.
-+
-[source,console]
-----
-POST product-index/_bulk?refresh=true
-{ "index": { "_id": "1" } }
-{ "product-vector": [230.0, 300.33, -34.8988, 15.555, -200.0], "price": 1599 }
-{ "index": { "_id": "2" } }
-{ "product-vector": [-0.5, 100.0, -13.0, 14.8, -156.0], "price": 799 }
-{ "index": { "_id": "3" } }
-{ "product-vector": [0.5, 111.3, -13.0, 14.8, -156.0], "price": 1099 }
-...
-----
-//TEST[continued]
-//TEST[s/\.\.\.//]
+As the non-quantized, original vectors are used to calculate the final score on the top results, rescoring combines:
+
+* The performance and memory gains of approximate retrieval using quantized vectors for retrieving the top candidates.
+* The accuracy of using the original vectors for rescoring the top candidates.
+
+All forms of quantization will result in some accuracy loss and as the quantization level increases the accuracy loss will also increase.
+Generally, we have found that:
+
+* `int8` requires minimal if any rescoring
+* `int4` requires some rescoring for higher accuracy and larger recall scenarios. Generally, oversampling by 1.5x-2x recovers most of the accuracy loss.
+* `bbq` requires rescoring except on exceptionally large indices or models specifically designed for quantization. We have found that between 3x-5x oversampling is generally sufficient. But for fewer dimensions or vectors that do not quantize well, higher oversampling may be required.
+
+You can use the `rescore_vector` preview:[] option to automatically perform reranking.
+When a rescore `oversample` parameter is specified, the approximate kNN search will:
+
+* Retrieve `num_candidates` candidates per shard.
+* From these candidates, the top `k * oversample` candidates per shard will be rescored using the original vectors.
+* The top `k` rescored candidates will be returned.
+
+Here is an example of using the `rescore_vector` option with the `oversample` parameter:
 
-. Use the <<search-search,search API>> to run a `script_score` query containing
-a <<vector-functions,vector function>>.
-+
-TIP: To limit the number of matched documents passed to the vector function, we
-recommend you specify a filter query in the `script_score.query` parameter. If
-needed, you can use a <<query-dsl-match-all-query,`match_all` query>> in this
-parameter to match all documents. However, matching all documents can
-significantly increase search latency.
-+
 [source,console]
 ----
-POST product-index/_search
+POST image-index/_search
 {
-  "query": {
-    "script_score": {
-      "query" : {
-        "bool" : {
-          "filter" : {
-            "range" : {
-              "price" : {
-                "gte": 1000
-              }
-            }
-          }
-        }
-      },
-      "script": {
-        "source": "cosineSimilarity(params.queryVector, 'product-vector') + 1.0",
-        "params": {
-          "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
-        }
-      }
+  "knn": {
+    "field": "image-vector",
+    "query_vector": [-5, 9, -12],
+    "k": 10,
+    "num_candidates": 100,
+    "rescore_vector": {
+      "oversample": 2.0
     }
-  }
+  },
+  "fields": [ "title", "file-type" ]
 }
 ----
 //TEST[continued]
+// TEST[s/"k": 10/"k": 3/]
+// TEST[s/"num_candidates": 100/"num_candidates": 3/]
+
+This example will:
+
+* Search using approximate kNN for the top 100 candidates.
+* Rescore the top 20 candidates (`oversample * k`) per shard using the original, non-quantized vectors.
+* Return the top 10 (`k`) rescored candidates.
+* Merge the rescored candidates from all shards, and return the top 10 (`k`) results.
 
 [discrete]
-[[dense-vector-knn-search-reranking]]
-==== Oversampling and rescoring for quantized vectors
+[[dense-vector-knn-search-rescoring-rescore-additional]]
+===== Additional rescoring techniques
 
-All forms of quantization will result in some accuracy loss and as the quantization level increases the accuracy loss will also increase.
-Generally, we have found that:
-- `int8` requires minimal if any rescoring
-- `int4` requires some rescoring for higher accuracy and larger recall scenarios. Generally, oversampling by 1.5x-2x recovers most of the accuracy loss.
-- `bbq` requires rescoring except on exceptionally large indices or models specifically designed for quantization. We have found that between 3x-5x oversampling is generally sufficient. But for fewer dimensions or vectors that do not quantize well, higher oversampling may be required.
+The following sections provide additional ways of rescoring:
+
+[discrete]
+[[dense-vector-knn-search-rescoring-rescore-section]]
+====== Use the `rescore` section for top-level kNN search
+
+You can use this option when you don't want to rescore on each shard, but on the top results from all shards.
 
-There are two main ways to oversample and rescore. The first is to utilize the <<rescore, rescore section>> in the `_search` request.
+Use the <<rescore, rescore section>> in the `_search` request to rescore the top results from a kNN search.
 
 Here is an example using the top level `knn` search with oversampling and using `rescore` to rerank the results:
 
@@ -1210,8 +1187,16 @@ gathering 20 nearest neighbors according to quantized scoring and rescoring with
 <5> The weight of the original query, here we simply throw away the original score
 <6> The weight of the rescore query, here we only use the rescore query
 
-The second way is to score per shard with the <<query-dsl-knn-query, knn query>> and <<query-dsl-script-score-query, script_score query >>. Generally, this means that there will be more rescoring per shard, but this
-can increase overall recall at the cost of compute.
+
+[discrete]
+[[dense-vector-knn-search-rescoring-script-score]]
+====== Use a `script_score` query to rescore per shard
+
+You can use this option when you want to rescore on each shard and want more fine-grained control on the rescoring
+than the `rescore_vector` option provides.
+
+Use rescore per shard with the <<query-dsl-knn-query, knn query>> and <<query-dsl-script-score-query, script_score query>>.
+Generally, this means that there will be more rescoring per shard, but this can increase overall recall at the cost of compute.
 
 [source,console]
 --------------------------------------------------
@@ -1243,3 +1228,87 @@ POST /my-index/_search
 <3> The number of candidates to use for the initial approximate `knn` search. This will search using the quantized vectors
 and return the top 20 candidates per shard to then be scored
 <4> The script to score the results. Script score will interact directly with the originally provided float32 vector.
+
+
+[discrete]
+[[exact-knn]]
+=== Exact kNN
+
+To run an exact kNN search, use a `script_score` query with a vector function.
+
+. Explicitly map one or more `dense_vector` fields. If you don't intend to use
+the field for approximate kNN, set the `index` mapping option to `false`. This
+can significantly improve indexing speed.
++
+[source,console]
+----
+PUT product-index
+{
+  "mappings": {
+    "properties": {
+      "product-vector": {
+        "type": "dense_vector",
+        "dims": 5,
+        "index": false
+      },
+      "price": {
+        "type": "long"
+      }
+    }
+  }
+}
+----
+
+. Index your data.
++
+[source,console]
+----
+POST product-index/_bulk?refresh=true
+{ "index": { "_id": "1" } }
+{ "product-vector": [230.0, 300.33, -34.8988, 15.555, -200.0], "price": 1599 }
+{ "index": { "_id": "2" } }
+{ "product-vector": [-0.5, 100.0, -13.0, 14.8, -156.0], "price": 799 }
+{ "index": { "_id": "3" } }
+{ "product-vector": [0.5, 111.3, -13.0, 14.8, -156.0], "price": 1099 }
+...
+----
+//TEST[continued]
+//TEST[s/\.\.\.//]
+
+. Use the <<search-search,search API>> to run a `script_score` query containing
+a <<vector-functions,vector function>>.
++
+TIP: To limit the number of matched documents passed to the vector function, we
+recommend you specify a filter query in the `script_score.query` parameter. If
+needed, you can use a <<query-dsl-match-all-query,`match_all` query>> in this
+parameter to match all documents. However, matching all documents can
+significantly increase search latency.
++
+[source,console]
+----
+POST product-index/_search
+{
+  "query": {
+    "script_score": {
+      "query" : {
+        "bool" : {
+          "filter" : {
+            "range" : {
+              "price" : {
+                "gte": 1000
+              }
+            }
+          }
+        }
+      },
+      "script": {
+        "source": "cosineSimilarity(params.queryVector, 'product-vector') + 1.0",
+        "params": {
+          "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
+        }
+      }
+    }
+  }
+}
+----
+//TEST[continued]

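The `rescore` section technique referenced above (callouts <5> and <6>) pairs an oversampled top-level `knn` search with a `script_score` rescorer that discards the quantized score and keeps only the full-precision score. The full example sits outside this hunk, so the following is only a sketch under those assumptions; the index name, field name, and vector values are illustrative:

[source,console]
----
POST my-index/_search
{
  "size": 10,
  "knn": {
    "field": "my-vector",
    "query_vector": [1.5, -0.5, 3.0],
    "k": 20,
    "num_candidates": 100
  },
  "rescore": {
    "window_size": 20,
    "query": {
      "rescore_query": {
        "script_score": {
          "query": { "match_all": {} },
          "script": {
            "source": "cosineSimilarity(params.queryVector, 'my-vector') + 1.0",
            "params": { "queryVector": [1.5, -0.5, 3.0] }
          }
        }
      },
      "query_weight": 0,
      "rescore_query_weight": 1
    }
  }
}
----

Setting `query_weight` to 0 and `rescore_query_weight` to 1 keeps only the rescored, full-precision score for the top 20 candidates, matching the behavior described by the callouts.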
docs/reference/search/search.asciidoc

Lines changed: 2 additions & 0 deletions
@@ -534,6 +534,8 @@ not both. Refer to <<knn-semantic-search>> to learn more.
 (Optional, float)
 include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-similarity]
 
+include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=knn-rescore-vector]
+
 ====
 
 [[search-api-min-score]]
