Commit 88599be

docs: correct docs for answer relevancy (#86)
1 parent 154902d commit 88599be


docs/metrics.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -35,7 +35,7 @@ results = context_rel.score(dataset)
 This measures how relevant the generated answer is to the prompt. If the generated answer is incomplete or contains redundant information, the score will be low. This is quantified by estimating the chance of an LLM generating the given question from the generated answer. Values range over (0, 1); higher is better.
 ```python
 from ragas.metrics.answer_relevancy import AnswerRelevancy
-answer_relevancy = AnswerRelevancy(model_name="t5-small")
+answer_relevancy = AnswerRelevancy()
 # Dataset({
 #   features: ['question','answer'],
 #   num_rows: 25
````
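The docs' description ("the chance of an LLM generating the given question using the generated answer") can be illustrated with a toy computation. This is not ragas's internal implementation; `ask_llm` and `similarity` below are hypothetical stand-ins for a real LLM call and an embedding-based similarity.

```python
# Toy illustration of the scoring idea from the docs prose above, NOT
# ragas's actual implementation: generate candidate questions from the
# answer with an LLM, then measure how close they are to the original.

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call; returns a canned
    # response here so the sketch runs without any dependencies.
    return "What is the boiling point of water?"

def similarity(a: str, b: str) -> float:
    # Crude word-overlap (Jaccard) similarity standing in for the
    # embedding cosine similarity a real implementation might use.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def answer_relevancy_score(question: str, answer: str, n: int = 3) -> float:
    # Generate n candidate questions from the answer and average their
    # similarity to the original question; redundant or incomplete
    # answers yield off-target questions and thus a low score.
    generated = [
        ask_llm(f"Generate a question for this answer: {answer}")
        for _ in range(n)
    ]
    return sum(similarity(question, g) for g in generated) / n
```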
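For context, the corrected call would be used roughly as follows. This is a minimal sketch inferred from the diff: the `score(dataset)` pattern comes from the hunk header's `results = context_rel.score(dataset)` line, and the `question`/`answer` columns from the `Dataset({...})` comment; the example rows and the `Dataset.from_dict` construction are assumptions, not part of this commit.

```python
# Minimal usage sketch of the corrected docs snippet; the dataset
# contents are illustrative assumptions, not part of this commit.
from datasets import Dataset
from ragas.metrics.answer_relevancy import AnswerRelevancy

# After this fix, AnswerRelevancy is constructed without model_name.
answer_relevancy = AnswerRelevancy()

# A Dataset with 'question' and 'answer' columns, as the docs' comment shows.
dataset = Dataset.from_dict({
    "question": ["What is the boiling point of water at sea level?"],
    "answer": ["Water boils at 100 degrees Celsius at sea level."],
})

# Mirrors the `results = context_rel.score(dataset)` pattern shown in the hunk header.
results = answer_relevancy.score(dataset)
```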
