solutions/search/vector/knn.md
* `int4` requires some rescoring for higher accuracy and larger recall scenarios. Generally, oversampling by 1.5x-2x recovers most of the accuracy loss.
* `bbq` requires rescoring except on exceptionally large indices or models specifically designed for quantization. We have found that 3x-5x oversampling is generally sufficient, but for fewer dimensions or vectors that do not quantize well, higher oversampling may be required.
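For context, the quantization type is selected per field through the `index_options` of a `dense_vector` mapping. The following is a minimal sketch, not a definitive configuration; the index name, field name, and dimension count are illustrative:

```console
PUT my-bbq-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "index": true,
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
```

Other quantized variants such as `int8_hnsw` and `int4_hnsw` are configured the same way, by changing the `index_options` `type`.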
#### The `rescore_vector` option
```{applies_to}
stack: preview 9.0, ga 9.1
```
You can use the `rescore_vector` option to automatically perform reranking. When a rescore `oversample` parameter is specified, the approximate kNN search will:
* Retrieve `num_candidates` candidates per shard.
* Rescore the top `k * oversample` of those candidates per shard using the original, unquantized vectors.
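The steps above can be sketched as a search request; the index name, field name, and query vector are illustrative:

```console
POST my-index/_search
{
  "knn": {
    "field": "my_vector",
    "query_vector": [0.12, -0.45, 0.33],
    "k": 10,
    "num_candidates": 100,
    "rescore_vector": {
      "oversample": 2.0
    }
  }
}
```

With `k: 10` and `oversample: 2.0`, each shard rescores its top `k * oversample = 20` candidates against the original vectors before the global top `k` results are returned.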