|
1. `faithfulness`: measures the factual consistency of the generated answer against the given context. This is done using a multi-step paradigm that first creates statements from the generated answer and then verifies each of these statements against the context. The score is scaled to the (0, 1) range; higher is better. (A rough sketch of this paradigm follows the list below.)
```python
-from ragas.metrics import faithfulness
+from ragas.metrics.factuality import Faithfulness
+faithfulness = Faithfulness()
+
# Dataset({
#   features: ['question','contexts','answer'],
#   num_rows: 25
# })
dataset: Dataset

-results = evaluate(dataset, metrics=[faithfulness])
+results = faithfulness.score(dataset)
```
-2. `answer_relevancy`: measures how relevant is the generated answer to the prompt. This is quantified using conditional likelihood of an LLM generating the question given the answer. This is implemented using a custom model. Values range (0,1), higher the better.
+
+2. `context_relevancy`: measures how relevant the retrieved context is to the prompt. This is done using a combination of OpenAI models and cross-encoder models. To improve the score, try to optimize the amount of relevant information present in the retrieved context. (A sketch of the cross-encoder part follows the list below.)
```python
-from ragas.metrics import answer_relevancy
+from ragas.metrics.context_relevancy import ContextRelevancy
+context_rel = ContextRelevancy(strictness=3)
# Dataset({
-# features: ['question','answer'],
+# features: ['question','contexts'],
# num_rows: 25
# })
dataset: Dataset

-results = evaluate(dataset, metrics=[answer_relevancy])
+results = context_rel.score(dataset)
```

-3. `context_relevancy`: measures how relevant is the retrieved context to the prompt. This is quantified using a custom trained cross encoder model. Values range (0,1), higher the better.
+3. `answer_relevancy`: measures how relevant the generated answer is to the prompt. This is quantified using the conditional likelihood of an LLM generating the question given the answer, implemented with a custom model. Values are in the (0, 1) range; higher is better. (A sketch of this likelihood idea follows the list below.)
```python
-from ragas.metrics import context_relevancy
+from ragas.metrics.answer_relevancy import AnswerRelevancy
+answer_relevancy = AnswerRelevancy(model_name="t5-small")
# Dataset({
-# features: ['question','contexts'],
+# features: ['question','answer'],
# num_rows: 25
# })
dataset: Dataset

-results = evaluate(dataset, metrics=[context_relevancy])
+results = answer_relevancy.score(dataset)
```
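To make the multi-step paradigm behind `faithfulness` concrete, here is a minimal, illustrative sketch. It is not the ragas implementation: ragas uses an LLM for both statement creation and verification, whereas this sketch substitutes a naive sentence split and an off-the-shelf NLI cross-encoder (the model choice and label order are assumptions) purely to show how the ratio-style score ends up in the (0, 1) range.

```python
from sentence_transformers import CrossEncoder

# Stand-in verifier: an NLI cross-encoder instead of the LLM ragas uses.
nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")

def faithfulness_sketch(answer: str, context: str) -> float:
    # Step 1: split the generated answer into simple statements
    # (ragas asks an LLM to do this more carefully).
    statements = [s.strip() for s in answer.split(".") if s.strip()]
    # Step 2: check whether the context entails each statement.
    probs = nli.predict([(context, s) for s in statements], apply_softmax=True)
    # Assumed label order from the model card: contradiction, entailment, neutral.
    supported = sum(1 for p in probs if p.argmax() == 1)
    # Step 3: faithfulness = supported statements / total statements.
    return supported / len(statements)

print(faithfulness_sketch(
    answer="The Eiffel Tower is in Paris. It was completed in 1889.",
    context="The Eiffel Tower, completed in 1889, stands in Paris, France.",
))
```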
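In the same spirit, a minimal sketch of the cross-encoder half of `context_relevancy`: score each sentence of the retrieved context against the question and average. The model name, the naive sentence split, and the sigmoid averaging are illustrative assumptions, and the OpenAI-model part of the metric is left out entirely.

```python
import math

from sentence_transformers import CrossEncoder

# Assumed relevance model; the model ragas actually uses may differ.
relevance = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def context_relevancy_sketch(question: str, context: str) -> float:
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    # Score each (question, sentence) pair; the raw outputs are logits.
    scores = relevance.predict([(question, s) for s in sentences])
    # Squash each logit to (0, 1) with a sigmoid and average over sentences.
    return sum(1 / (1 + math.exp(-s)) for s in scores) / len(sentences)

print(context_relevancy_sketch(
    question="When was the Eiffel Tower completed?",
    context="The Eiffel Tower was completed in 1889. Paris hosts many museums.",
))
```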
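Finally, a sketch of the conditional-likelihood idea behind `answer_relevancy`, with a plain `t5-small` standing in for the custom model: the answer goes into the encoder, the question is the decoder target, and the average token loss is turned into a score. The `generate question:` prefix and the `exp(-loss)` scaling are assumptions made for illustration.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Stand-in model; ragas ships a custom question-generation model.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def answer_relevancy_sketch(question: str, answer: str) -> float:
    # Conditional likelihood of the question given the answer:
    # the answer is the encoder input, the question is the target.
    inputs = tokenizer("generate question: " + answer, return_tensors="pt")
    labels = tokenizer(question, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean per-token cross-entropy
    # exp(-loss) maps the likelihood into (0, 1]; higher means more relevant.
    return float(torch.exp(-loss))

print(answer_relevancy_sketch(
    question="When was the Eiffel Tower completed?",
    answer="The Eiffel Tower was completed in 1889.",
))
```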
## Why is ragas better than scoring using GPT 3.5 directly?
LLMs like GPT 3.5 struggle to score generated text directly: they tend to return only integer scores, and those scores vary from one invocation to the next. ragas addresses this by using paradigms and techniques that leverage LLMs while minimizing this bias.
<h1 align="center">
|