docs/concepts/metrics/faithfulness.md
@@ -58,3 +58,24 @@ Let's examine how faithfulness was calculated using the low faithfulness answer:
## Faithfulness with HHEM 2.1 Model
[Vectara's HHEM 2.1](https://vectara.com/blog/hhem-2-1-a-better-hallucination-detection-model/) is a T5-based classifier model trained to detect hallucinations in LLM-generated text. It can be used in the second step of calculating faithfulness, i.e. when the claims made in the answer are cross-checked against the given context to determine whether they can be inferred from it. The model is free, small, and open source, which makes it very efficient for production use cases. To calculate faithfulness with this model, use the following code snippet:
```{code-block} python
from datasets import Dataset
from ragas.metrics import FaithulnesswithHHEM
from ragas import evaluate

# instantiate the HHEM-based variant of the faithfulness metric
faithfulness_with_hhem = FaithulnesswithHHEM()

data_samples = {
    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
    'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],
    'contexts': [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
    ['The Green Bay Packers...Green Bay, Wisconsin.','The Packers compete...Football Conference']],
}

# score the samples with the HHEM-based faithfulness metric
dataset = Dataset.from_dict(data_samples)
score = evaluate(dataset, metrics=[faithfulness_with_hhem])
score.to_pandas()
```
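
To see what the cross-checking step itself looks like outside of Ragas, the sketch below scores (context, claim) pairs directly with the open HHEM-2.1 checkpoint on Hugging Face and aggregates the results into a faithfulness-style ratio. This is an illustrative sketch, not part of the Ragas API: the checkpoint name `vectara/hallucination_evaluation_model`, its `predict` helper (enabled via `trust_remote_code=True`), and the 0.5 support threshold are assumptions based on the model card, so adapt them if the interface differs.

```{code-block} python
# Illustrative sketch only: checking claims against context with HHEM directly.
# Assumes the Hugging Face checkpoint 'vectara/hallucination_evaluation_model'
# and its custom `predict` helper; the 0.5 threshold is an arbitrary choice.
from transformers import AutoModelForSequenceClassification

# (context, claim) pairs: each claim extracted from the answer is checked
# against the retrieved context.
pairs = [
    (
        "The First AFL–NFL World Championship Game was an American football game "
        "played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,",
        "The first superbowl was held on Jan 15, 1967",
    ),
]

hhem = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each score is a consistency probability in [0, 1]; claims above the threshold
# count as supported, and faithfulness is the fraction of supported claims.
scores = hhem.predict(pairs)
faithfulness = sum(float(s) > 0.5 for s in scores) / len(pairs)
print(faithfulness)
```

This mirrors only the second step of the metric; the first step, breaking the generated answer into individual claims, is still performed by an LLM inside Ragas.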