## Issue Link / Problem Description
<!-- Link to related issue or describe the problem this PR solves -->
- Fixes #[issue_number]
- OR describe the issue: What problem does this solve? How can it be replicated?

The existing documentation contains a mistake; in some places it also has poor style and grammar or is missing key information.
## Changes Made
<!-- Describe what you changed and why -->
- Provided a separate example to explain Context Precision as a metric
- Corrected the definition of the `LLMContextPrecisionWithoutReference` metric
- Improved grammar and style
- For `NonLLMContextPrecisionWithReference`, added the packages that need to be installed and included an example of the distance algorithm used
---
<!--
Thank you for contributing to Ragas!
Please fill out the sections above as completely as possible.
The more information you provide, the faster your PR can be reviewed and
merged.
-->
`docs/concepts/metrics/available_metrics/context_precision.md` (+33 -10 lines changed)
@@ -13,14 +13,6 @@ $$
 
 Where $K$ is the total number of chunks in `retrieved_contexts` and $v_k \in \{0, 1\}$ is the relevance indicator at rank $k$.
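For a quick worked instance of this formula (illustrative numbers, assuming the standard Context Precision@K definition): with $K = 3$ retrieved chunks where ranks 1 and 3 are relevant ($v_1 = 1$, $v_2 = 0$, $v_3 = 1$),

$$
\text{Context Precision@3} = \frac{\frac{1}{1} \cdot 1 + \frac{1}{2} \cdot 0 + \frac{2}{3} \cdot 1}{2} = \frac{5}{6} \approx 0.83
$$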
 
-## LLM Based Context Precision
-
-The following metrics uses LLM to identify if a retrieved context is relevant or not.
-
-### Context Precision without reference
-
-`LLMContextPrecisionWithoutReference` metric can be used when you have both retrieved contexts and also reference answer associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `response`.
-
 #### Example
 
 ```python
@@ -87,9 +79,38 @@ Output
 0.49999999995
 ```
 
+## LLM Based Context Precision
+
+The following metrics use an LLM to identify whether a retrieved context is relevant.
+
+### Context Precision without reference
+
+The `LLMContextPrecisionWithoutReference` metric can be used when no reference answer is available. To estimate whether the retrieved contexts are relevant, this method uses the LLM to compare each chunk in `retrieved_contexts` with the `response`.
+
+#### Example
+
+```python
+from ragas import SingleTurnSample
+from ragas.metrics import LLMContextPrecisionWithoutReference
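A complete minimal sketch of this example, assuming the Ragas single-turn metric API (`evaluator_llm` and the sample values below are illustrative assumptions):

```python
from ragas import SingleTurnSample
from ragas.metrics import LLMContextPrecisionWithoutReference

# evaluator_llm is assumed to be an LLM already wrapped for Ragas,
# e.g. via LangchainLLMWrapper.
context_precision = LLMContextPrecisionWithoutReference(llm=evaluator_llm)

sample = SingleTurnSample(
    user_input="Where is the Eiffel Tower located?",
    response="The Eiffel Tower is located in Paris.",
    retrieved_contexts=["The Eiffel Tower is located in Paris."],
)

# Run inside an async context (e.g. a notebook cell).
await context_precision.single_turn_ascore(sample)
```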
-`LLMContextPrecisionWithReference` metric is can be used when you have both retrieved contexts and also reference context associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `reference`.
+The `LLMContextPrecisionWithReference` metric can be used when you have both retrieved contexts and a reference response associated with a `user_input`. To estimate whether the retrieved contexts are relevant, this method uses the LLM to compare each chunk in `retrieved_contexts` with the `reference`.
 
 #### Example
 
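A minimal sketch of `LLMContextPrecisionWithReference` usage, under the same API assumptions (values illustrative):

```python
from ragas import SingleTurnSample
from ragas.metrics import LLMContextPrecisionWithReference

# evaluator_llm is again an assumed, already-wrapped evaluator LLM.
context_precision = LLMContextPrecisionWithReference(llm=evaluator_llm)

sample = SingleTurnSample(
    user_input="Where is the Eiffel Tower located?",
    reference="The Eiffel Tower is located in Paris.",
    retrieved_contexts=["The Eiffel Tower is located in Paris."],
)

await context_precision.single_turn_ascore(sample)
```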
@@ -114,12 +135,14 @@ Output
 
 ## Non LLM Based Context Precision
 
-This metric uses traditional methods to determine whether a retrieved context is relevant. It relies on non-LLM-based metrics as a distance measure to evaluate the relevance of retrieved contexts.
+This metric uses non-LLM-based methods (such as the [Levenshtein distance measure](https://en.wikipedia.org/wiki/Levenshtein_distance)) to determine whether a retrieved context is relevant.
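To make the distance measure concrete, a small sketch using rapidfuzz's normalized Levenshtein similarity (the strings are made-up examples):

```python
from rapidfuzz import distance

retrieved = "The Eiffel Tower is located in Paris."
reference = "The Eiffel Tower stands in Paris, France."

# Normalized Levenshtein similarity in [0, 1]; higher means the
# retrieved chunk is a closer string match to the reference chunk.
score = distance.Levenshtein.normalized_similarity(retrieved, reference)
print(round(score, 2))
```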
 
 ### Context Precision with reference contexts
 
 The `NonLLMContextPrecisionWithReference` metric is designed for scenarios where both retrieved contexts and reference contexts are available for a `user_input`. To determine if a retrieved context is relevant, this method compares each retrieved context or chunk in `retrieved_contexts` with every context in `reference_contexts` using a non-LLM-based similarity measure.
 
+Note that this metric requires the rapidfuzz package: `pip install rapidfuzz`.
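A minimal usage sketch for `NonLLMContextPrecisionWithReference`, again assuming the Ragas single-turn API (sample values illustrative):

```python
from ragas import SingleTurnSample
from ragas.metrics import NonLLMContextPrecisionWithReference

# No LLM is needed; relevance is judged by string-distance comparison.
context_precision = NonLLMContextPrecisionWithReference()

sample = SingleTurnSample(
    retrieved_contexts=["The Eiffel Tower is located in Paris."],
    reference_contexts=[
        "Paris is the capital of France.",
        "The Eiffel Tower is one of the most visited monuments in the world.",
    ],
)

await context_precision.single_turn_ascore(sample)
```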