docs/reference/search/search-your-data/knn-search.asciidoc
Generally, we have found that:
* `int4` requires some rescoring for higher accuracy and larger recall scenarios. Generally, oversampling by 1.5x-2x recovers most of the accuracy loss.
* `bbq` requires rescoring except on exceptionally large indices or models specifically designed for quantization. We have found that between 3x-5x oversampling is generally sufficient. But for fewer dimensions or vectors that do not quantize well, higher oversampling may be required.
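To make the guidance above concrete, here is a small illustrative calculation of how many candidates to rescore for a desired top-`k`, using the oversampling ranges quoted above (the helper name and the choice of `k` are ours, not from the docs):

```python
# Illustrative only: turn the suggested oversampling factors into a
# concrete number of candidates to retrieve and rescore.

def rescore_depth(k: int, oversample: float) -> int:
    """Candidates to rescore when oversampling the top-k by a given factor."""
    return int(k * oversample)

k = 10
# int4: 1.5x-2x oversampling usually recovers most of the accuracy loss.
print(rescore_depth(k, 1.5), rescore_depth(k, 2.0))  # 15 20
# bbq: 3x-5x oversampling is generally sufficient.
print(rescore_depth(k, 3.0), rescore_depth(k, 5.0))  # 30 50
```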
There are three main ways to oversample and rescore:
===== Use the `rescore_vector` option to rescore per shard
You can use the `rescore_vector` preview:[] option to automatically perform reranking.
When a rescore `num_candidates_factor` parameter is specified, the approximate kNN search will retrieve the top `num_candidates * num_candidates_factor` candidates per shard.
It will then use the original vectors to rescore them, and return the top `k` results.
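As a sketch, a request using this option could be built as a plain dict before being sent to the search endpoint. The index-free shape below follows the description above; the `my_vector` field name, the default values, and the helper itself are placeholders we introduce for illustration, not part of the documentation:

```python
# Sketch of a kNN search body using the rescore_vector option described above.
# "my_vector" and the defaults are placeholder assumptions.

def knn_with_rescore(query_vector, k=10, num_candidates=100, factor=2.0):
    """Build a kNN search body asking each shard to oversample and rescore."""
    return {
        "knn": {
            "field": "my_vector",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": num_candidates,
            # Each shard retrieves num_candidates * factor candidates,
            # rescores them with the original vectors, and returns its top k.
            "rescore_vector": {"num_candidates_factor": factor},
        }
    }

body = knn_with_rescore([0.1, 0.2, 0.3])
```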
This example will:
* Rescore the top 200 candidates per shard using the original, non-quantized vectors.
* Merge the rescored candidates from all shards, and return the top 10 (`k`) results.
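The per-shard oversample, rescore, and merge flow above can be sketched as a toy simulation. Everything here is synthetic (random vectors, dot-product scoring, Python lists standing in for shards); it only illustrates the control flow, not Elasticsearch internals:

```python
# Toy simulation of: rescore the top candidates per shard with the
# original (non-quantized) vectors, then merge shards and keep the top k.
import heapq
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def search(shards, query, k, rescore_depth):
    """Rescore up to rescore_depth candidates per shard, merge, return top k."""
    rescored = []
    for shard in shards:
        # Stand-in for the approximate, quantized retrieval step.
        candidates = shard[:rescore_depth]
        # Rescore with the original vectors.
        rescored.extend((dot(query, vec), doc_id) for doc_id, vec in candidates)
    # Merge across shards; nlargest returns scores in descending order.
    return heapq.nlargest(k, rescored)

random.seed(0)
shards = [[(i, [random.random() for _ in range(4)]) for i in range(50)]
          for _ in range(3)]
top = search(shards, [1.0, 0.0, 0.0, 0.0], k=10, rescore_depth=200)
print(len(top))  # 10
```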