Commit 2059a4d

Improved context precision documentation (#2266)
## Issue Link / Problem Description

The existing documentation has a mistake and, in some cases, poor style and grammar, or is missing key information.

## Changes Made

- Provided a separate example explaining Context Precision as a metric
- Corrected the definition of the metric `LLMContextPrecisionWithoutReference`
- Improved grammar and style
- For `NonLLMContextPrecisionWithReference`, added the packages to be installed and included an example of the distance algorithm used
1 parent 998c3ba commit 2059a4d

1 file changed: +33 −10 lines changed

docs/concepts/metrics/available_metrics/context_precision.md

````diff
@@ -13,14 +13,6 @@ $$
 
 Where $K$ is the total number of chunks in `retrieved_contexts` and $v_k \in \{0, 1\}$ is the relevance indicator at rank $k$.
 
-## LLM Based Context Precision
-
-The following metrics uses LLM to identify if a retrieved context is relevant or not.
-
-### Context Precision without reference
-
-`LLMContextPrecisionWithoutReference` metric can be used when you have both retrieved contexts and also reference answer associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `response`.
-
 #### Example
 
 ```python
@@ -87,9 +79,38 @@ Output
 0.49999999995
 ```
 
+## LLM Based Context Precision
+
+The following metrics uses LLM to identify if a retrieved context is relevant or not.
+
+### Context Precision without reference
+
+The `LLMContextPrecisionWithoutReference` metric can be used without the availability of a reference answer. To estimate if the retrieved contexts are relevant, this method uses the LLM to compare each chunk in `retrieved_contexts` with the `response`.
+
+#### Example
+
+```python
+from ragas import SingleTurnSample
+from ragas.metrics import LLMContextPrecisionWithoutReference
+
+context_precision = LLMContextPrecisionWithoutReference(llm=evaluator_llm)
+
+sample = SingleTurnSample(
+    user_input="Where is the Eiffel Tower located?",
+    response="The Eiffel Tower is located in Paris.",
+    retrieved_contexts=["The Eiffel Tower is located in Paris."],
+)
+
+await context_precision.single_turn_ascore(sample)
+```
+Output
+```
+0.9999999999
+```
 ### Context Precision with reference
 
-`LLMContextPrecisionWithReference` metric is can be used when you have both retrieved contexts and also reference context associated with a `user_input`. To estimate if a retrieved contexts is relevant or not this method uses the LLM to compare each of the retrieved context or chunk present in `retrieved_contexts` with `reference`.
+The `LLMContextPrecisionWithReference` metric can be used when you have both retrieved contexts and also a reference response associated with a `user_input`. To estimate if the retrieved contexts are relevant, this method uses the LLM to compare each chunk in `retrieved_contexts` with the `reference`.
 
 #### Example
@@ -114,12 +135,14 @@ Output
 
 ## Non LLM Based Context Precision
 
-This metric uses traditional methods to determine whether a retrieved context is relevant. It relies on non-LLM-based metrics as a distance measure to evaluate the relevance of retrieved contexts.
+This metric uses non-LLM-based methods (such as [Levenshtein distance measure](https://en.wikipedia.org/wiki/Levenshtein_distance)) to determine whether a retrieved context is relevant.
 
 ### Context Precision with reference contexts
 
 The `NonLLMContextPrecisionWithReference` metric is designed for scenarios where both retrieved contexts and reference contexts are available for a `user_input`. To determine if a retrieved context is relevant, this method compares each retrieved context or chunk in `retrieved_contexts` with every context in `reference_contexts` using a non-LLM-based similarity measure.
 
+Note that this metric would need the rapidfuzz package to be installed: `pip install rapidfuzz`.
+
 #### Example
 
 ```python
````
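For context, the Context Precision@K formula the diff's surrounding text refers to (the mean of Precision@k over the ranks where the retrieved chunk is relevant) can be sketched in a few lines of plain Python. `context_precision_at_k` is a hypothetical helper for illustration, not part of the ragas API:

```python
def context_precision_at_k(relevance):
    """Mean of Precision@k over ranks k where the chunk is relevant.

    relevance: list of 0/1 indicators v_k, one per retrieved chunk, in rank order.
    """
    total_relevant = sum(relevance)
    if total_relevant == 0:
        return 0.0
    score = 0.0
    hits = 0
    for k, v in enumerate(relevance, start=1):
        hits += v
        if v:
            # Precision@k = relevant chunks seen so far / rank
            score += hits / k
    return score / total_relevant

# Relevant chunks at ranks 1 and 3: (1/1 + 2/3) / 2 = 5/6 ≈ 0.833
context_precision_at_k([1, 0, 1])
```

This also illustrates why a single relevant retrieved chunk, as in the diff's Eiffel Tower example, scores (numerically almost) 1.0.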

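Regarding the rapidfuzz note added in the last hunk: below is a minimal pure-Python sketch of the Levenshtein edit distance that the metric's similarity measure is based on (rapidfuzz provides an optimized implementation plus normalized variants). The `levenshtein` function here is illustrative, not the rapidfuzz API:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # Dynamic programming over one rolling row of the edit-distance matrix.
    prev = list(range(len(b) + 1))  # distance from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                 # delete ca
                curr[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),    # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

# Classic example: kitten -> sitting takes 3 edits
levenshtein("kitten", "sitting")
```

A normalized version of this distance (scaled by the longer string's length) is what turns raw edit counts into a 0-to-1 relevance score.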