Now let's initialize the `TestsetGenerator` object with the corresponding generator and critic LLMs.
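As a rough sketch of what that initialization can look like (assuming a Ragas 0.1-style API and OpenAI models via LlamaIndex; the model choices are placeholders, not recommendations):

```python
from ragas.testset.generator import TestsetGenerator
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# The generator LLM drafts candidate questions from your documents;
# the critic LLM filters and refines them. Both models here are
# illustrative placeholders -- use whatever you run in practice.
generator_llm = OpenAI(model="gpt-3.5-turbo-16k")
critic_llm = OpenAI(model="gpt-4")
embeddings = OpenAIEmbedding()

generator = TestsetGenerator.from_llama_index(
    generator_llm=generator_llm,
    critic_llm=critic_llm,
    embeddings=embeddings,
)
```

With the generator in hand, the 0.1-style API exposes `generator.generate_with_llamaindex_docs(documents, test_size=...)` to produce a synthetic testset from your LlamaIndex documents.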
Now that we have a `QueryEngine` for the `VectorStoreIndex`, we can use the LlamaIndex integration Ragas provides to evaluate it.
In order to run an evaluation with Ragas and LlamaIndex you need three things:
1. LlamaIndex `QueryEngine`: what we will be evaluating
2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/)
3. Questions: A list of questions that Ragas will test the `QueryEngine` against.
First, let's generate the questions. Ideally you should use questions you see in production, so that the distribution of questions used for evaluation matches the distribution seen in production; this ensures the scores reflect real-world performance. To start off, though, we'll use a few example questions.
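For instance, a minimal hand-written set might look like the following (these questions are illustrative placeholders; swap in queries from your own domain or production logs):

```python
# Placeholder evaluation questions -- replace with questions drawn from
# your production traffic once you have it.
eval_questions = [
    "What are the main topics covered in the documents?",
    "How does the system handle edge cases?",
    "What limitations are discussed?",
]
```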
"In order to run an evaluation with Ragas and LlamaIndex you need 3 things\n",
299
308
"\n",
300
309
"1. LlamaIndex `QueryEngine`: what we will be evaluating\n",
301
-
"2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://github.com/explodinggradients/ragas/blob/main/docs/metrics.md)\n",
310
+
"2. Metrics: Ragas defines a set of metrics that can measure different aspects of the `QueryEngine`. The available metrics and their meaning can be found [here](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/)\n",
302
311
"3. Questions: A list of questions that ragas will test the `QueryEngine` against. "