
Commit 66e132b

repo to PyPi
1 parent 7781b95 commit 66e132b

File tree

2 files changed (+3, −3 lines)


qdrant-landing/content/articles/relevance-feedback.md

Lines changed: 2 additions & 2 deletions
@@ -419,7 +419,7 @@ For example, Relevance Feedback Query can be a great aid for search agents, lett

### How to Use It

-For ease of use, as one shouldn't need a machine learning degree to use a new feature, we published a [Python package that customizes Naive Formula weights for your dataset, retriever, and feedback model](https://github.com/qdrant/relevance-feedback).
+For ease of use, as one shouldn't need a machine learning degree to use a new feature, we published a [Python package that customizes Naive Formula weights for your dataset, retriever, and feedback model](https://pypi.org/project/qdrant-relevance-feedback/).

What you need is a Qdrant collection, an idea of which feedback model you'd like to use to guide your retriever, and, optionally, a small set of use case-specific queries (50–300).

@@ -438,7 +438,7 @@ Once you've obtained the weights, simply plug them into your [Qdrant Client of c

### Evaluating Your Gains

-Additionally, the [Relevance Feedback Parameters package](https://github.com/qdrant/relevance-feedback) provides an `Evaluator` module with two metrics: **relative gain** based on the **abovethreshold@N** metric from the "Experiments" section above, and a metric more recognizable to people in search -- **Discounted Cumulative Gain (DCG) Win Rate**.
+Additionally, the [Relevance Feedback Parameters package](https://pypi.org/project/qdrant-relevance-feedback/) provides an `Evaluator` module with two metrics: **relative gain** based on the **abovethreshold@N** metric from the "Experiments" section above, and a metric more recognizable to people in search -- **Discounted Cumulative Gain (DCG) Win Rate**.

- **Discounted Cumulative Gain (DCG) Win Rate**
  For each query, we compute DCG@N for both compared methods (vanilla and relevance feedback-based retrieval) against ground truth relevancy scores from a feedback model. The method with the higher DCG@N gets a "win".
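The win-rate computation described above can be sketched in a few lines of plain Python. This is an illustrative re-implementation of the metric, not the package's `Evaluator` code; counting ties as neither a win nor a loss is an assumption.

```python
import math

def dcg_at_n(relevances, n):
    # DCG@N: graded relevance discounted by log2 of the rank position
    # (rank 1 -> log2(2), rank 2 -> log2(3), ...).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:n]))

def dcg_win_rate(rels_method, rels_baseline, n=10):
    # Fraction of queries where the method's DCG@N is strictly higher
    # than the baseline's. Each element is the list of feedback-model
    # relevance scores for one query's top results, in ranked order.
    wins = sum(
        1
        for a, b in zip(rels_method, rels_baseline)
        if dcg_at_n(a, n) > dcg_at_n(b, n)
    )
    return wins / len(rels_method)

# Relevance scores of the top-3 results for two queries, per method:
feedback = [[3, 2, 0], [0, 1, 0]]  # relevance feedback-based retrieval
vanilla = [[1, 1, 1], [2, 2, 0]]   # vanilla retrieval
print(dcg_win_rate(feedback, vanilla, n=3))  # 0.5: one win each
```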

qdrant-landing/content/documentation/concepts/search-relevance.md

Lines changed: 1 addition & 1 deletion
@@ -193,7 +193,7 @@ To leverage the feedback in search across the entire collection, Qdrant provides

Internally, Qdrant combines the feedback list into pairs, based on the relevance scores, and then uses these pairs in a formula that modifies vector space traversal during retrieval (changes the strategy of retrieval). This relevance feedback-based retrieval considers not only the similarity of candidates to the query but also to each feedback pair. For a more detailed description of how it works, refer to the article [Relevance Feedback in Qdrant](/articles/relevance-feedback).

-The `a`, `b`, and `c` parameters of the [`naive` strategy](#naive-strategy) need to be customized for each triplet of retriever, feedback model, and collection. To get these 3 weights adapted to your setup, use [our open source Python package](https://github.com/qdrant/relevance-feedback).
+The `a`, `b`, and `c` parameters of the [`naive` strategy](#naive-strategy) need to be customized for each triplet of retriever, feedback model, and collection. To get these 3 weights adapted to your setup, use [our open source Python package](https://pypi.org/project/qdrant-relevance-feedback/).

<aside role="alert">When using point IDs for <code>target</code> or <code>example</code>, these points are excluded from the search results. To include them, convert them to raw vectors first and use the raw vectors in the query.</aside>
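To make the role of the three weights concrete, here is a toy scoring function in plain Python. The actual Naive Formula is defined in the linked article and is applied during vector space traversal inside Qdrant; the linear combination below, and the choice of cosine similarity, are illustrative assumptions only, not Qdrant's formula.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def toy_score(query, candidate, pairs, a, b, c):
    # Toy stand-in for a three-weight relevance feedback score:
    # `a` weights similarity to the query, `b` similarity to positive
    # feedback examples, `c` (typically negative) similarity to
    # negative feedback examples. Purely illustrative.
    score = a * cosine(query, candidate)
    for pos, neg in pairs:
        score += b * cosine(pos, candidate) + c * cosine(neg, candidate)
    return score

# One feedback pair: a positive example aligned with the candidate,
# a negative example orthogonal to it.
print(toy_score([1.0, 0.0], [1.0, 0.0],
                [([1.0, 0.0], [0.0, 1.0])],
                a=1.0, b=0.5, c=-0.5))  # 1.5
```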

0 commit comments
