|
115 | 115 | "\n", |
116 | 116 | "Ragas provides you with a few metrics to evaluate the different aspects of your RAG systems namely\n", |
117 | 117 | "\n", |
118 | | - "1. metrics to evaluate retrieval: offers `context_relevancy` and `context_recall` which give you the measure of the performance of your retrieval system. \n", |
| 118 | + "1. metrics to evaluate retrieval: offers `context_precision` and `context_recall` which give you the measure of the performance of your retrieval system. \n", |
119 | 119 | "2. metrics to evaluate generation: offers `faithfulness` which measures hallucinations and `answer_relevancy` which measures how to-the-point the answers are to the question.\n", |
120 | 120 | "\n", |
121 | 121 | "The harmonic mean of these 4 aspects gives you the **ragas score** which is a single measure of the performance of your QA system across all the important aspects.\n", |
|
126 | 126 | "\n", |
127 | 127 | "1. **Faithfulness**: measures the information consistency of the generated answer against the given context. If any claims are made in the answer that cannot be deduced from context is penalized. It is calculated from `answer` and `retrieved context`.\n", |
128 | 128 | "\n", |
129 | | - "2. **Context Relevancy**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.\n", |
| 129 | + "2. **Context Precision**: measures how relevant retrieved contexts are to the question. Ideally, the context should only contain information necessary to answer the question. The presence of redundant information in the context is penalized. It is calculated from `question` and `retrieved context`.\n", |
130 | 130 | "\n", |
131 | 131 | "3. **Context Recall**: measures the recall of the retrieved context using annotated answer as ground truth. Annotated answer is taken as proxy for ground truth context. It is calculated from `ground truth` and `retrieved context`.\n", |
132 | 132 | "\n", |
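
Taken together, these metrics consume four fields per sample: the `question`, the generated `answer`, the retrieved `contexts`, and an annotated ground truth. A minimal sketch of an evaluation dataset with that layout, assuming the `question`/`answer`/`contexts`/`ground_truths` column names used by early ragas releases (the exact names may differ in your version):

```python
from datasets import Dataset

# One hand-made sample, for illustration only
eval_dataset = Dataset.from_dict({
    "question": ["When was the first Super Bowl played?"],
    "answer": ["The first Super Bowl was played on January 15, 1967."],
    # contexts is a list of retrieved passages per question
    "contexts": [[
        "The first AFL-NFL World Championship Game was played on January 15, 1967."
    ]],
    "ground_truths": [[
        "The first Super Bowl was held on January 15, 1967."
    ]],
})
```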
|
183 | 183 | "outputs": [], |
184 | 184 | "source": [ |
185 | 185 | "from ragas.metrics import (\n", |
186 | | - " context_relevancy,\n", |
| 186 | + " context_precision,\n", |
187 | 187 | " answer_relevancy,\n", |
188 | 188 | " faithfulness,\n", |
189 | 189 | " context_recall,\n", |
|
193 | 193 | "# list of metrics we're going to use\n", |
194 | 194 | "metrics = [\n", |
195 | 195 | " faithfulness,\n", |
196 | | - " answer_relevancy\n", |
| 196 | + " answer_relevancy,\n", |
197 | 197 | " context_recall,\n", |
198 | | - " context_relevancy,\n", |
| 198 | + " context_precision,\n", |
199 | 199 | " harmfulness,\n", |
200 | 200 | "]" |
201 | 201 | ] |
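
With the metrics list in hand, evaluation is a single call. A minimal sketch, assuming the `eval_dataset` built in the earlier sketch and a configured LLM (the metrics are LLM-graded, so e.g. an OpenAI API key must be set):

```python
from ragas import evaluate

# Scores every sample in eval_dataset against each metric in `metrics`
result = evaluate(eval_dataset, metrics=metrics)
print(result)  # one aggregate score per metric
```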
|