|
15 | 15 | " <h1> Quickstart </h1>\n", |
16 | 16 | "</p>\n", |
17 | 17 | "\n", |
18 | | - "welcome to the ragas quickstart. We're going to get you up and running with ragas as qickly as you can so that you can go back to improving your Retrieval Augmented Generation pipelines while this library makes sure your changes are improving your entire pipeline.\n", |
| 18 | + "Welcome to the ragas quickstart. We're going to get you up and running with ragas as quickly as you can so that you can go back to improving your Retrieval Augmented Generation pipelines while this library makes sure your changes are improving your entire pipeline.\n", |
19 | 19 | "\n", |
20 | 20 | "to kick things of lets start with the data\n", |
21 | 21 | "\n", |
|
62 | 62 | "\n", |
63 | 63 | "Ragas performs a `ground_truth` free evaluation of your RAG pipelines. This is because for most people building a gold labeled dataset which represents in the distribution they get in production is a very expensive process.\n", |
64 | 64 | "\n", |
65 | | - "**Note:** *While originially ragas was aimed at `ground_truth` free evalutions there is some aspects of the RAG pipeline that need `ground_truth` in order to measure. We're in the process of building a testset generation features that will make it easier. Checkout [issue#136](https://github.com/explodinggradients/ragas/issues/136) for more details.*\n", |
| 65 | + "**Note:** *While originally ragas was aimed at `ground_truth` free evaluations there is some aspects of the RAG pipeline that need `ground_truth` in order to measure. We're in the process of building a testset generation features that will make it easier. Checkout [issue#136](https://github.com/explodinggradients/ragas/issues/136) for more details.*\n", |
66 | 66 | "\n", |
67 | 67 | "Hence to work with ragas all you need are the following data\n", |
68 | 68 | "- question: `list[str]` - These are the questions you RAG pipeline will be evaluated on. \n", |
69 | 69 | "- answer: `list[str]` - The answer generated from the RAG pipeline and give to the user.\n", |
70 | 70 | "- contexts: `list[list[str]]` - The contexts which where passed into the LLM to answer the question.\n", |
71 | 71 | "- ground_truths: `list[list[str]]` - The ground truth answer to the questions. (only required if you are using context_recall)\n", |
72 | 72 | "\n", |
73 | | - "Ideally your list of questions should reflect the questions your users give, including those that you have been problamatic in the past.\n", |
| 73 | + "Ideally your list of questions should reflect the questions your users give, including those that you have been problematic in the past.\n", |
74 | 74 | "\n", |
75 | | - "Here we're using an example dataset from on of the baselines we created for the [Financial Opinion Mining and Question Answering (fiqa) Dataset](https://sites.google.com/view/fiqa/) we created. If you want to want to know more about the baseline, feel free to check the `experiements/baseline` section" |
| 75 | + "Here we're using an example dataset from on of the baselines we created for the [Financial Opinion Mining and Question Answering (fiqa) Dataset](https://sites.google.com/view/fiqa/) we created. If you want to want to know more about the baseline, feel free to check the `experiments/baseline` section" |
76 | 76 | ] |
77 | 77 | }, |
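If you just want to follow along with the example dataset, a sketch of loading it from the Hugging Face Hub might look like the following. The repo id `explodinggradients/fiqa` and config name `ragas_eval` are assumptions; check the repository for the exact identifiers.

```python
from datasets import load_dataset

# Assumed Hub id and config; adjust to whatever the repo actually publishes.
fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")
print(fiqa_eval)
```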
78 | 78 | { |
|
167 | 167 | "here you can see that we are using 4 metrics, but what do the represent?\n", |
168 | 168 | "\n", |
169 | 169 | "1. context_precision - a measure of how relevant the retrieved context is to the question. Conveys quality of the retrieval pipeline.\n", |
170 | | - "2. answer_relevancy - a measure of how relevent the answer is to the question\n", |
171 | | - "3. faithfulness - the factual consistancy of the answer to the context base on the question.\n", |
| 170 | + "2. answer_relevancy - a measure of how relevant the answer is to the question\n", |
| 171 | + "3. faithfulness - the factual consistency of the answer to the context base on the question.\n", |
172 | 172 | "4. context_recall: measures the ability of the retriever to retrieve all the necessary information needed to answer the question. \n", |
173 | | - "5. harmfulness (AspectCritique) - in general, `AspectCritique` is a metric that can be used to quantify various aspects of the answer. Aspects like harmfulness, maliciousness, coherence, correctness, concisenes are available by default but you can easily define your own. Check the [docs](./metrics.md) for more info.\n", |
| 173 | + "5. harmfulness (AspectCritique) - in general, `AspectCritique` is a metric that can be used to quantify various aspects of the answer. Aspects like harmfulness, maliciousness, coherence, correctness, conciseness are available by default but you can easily define your own. Check the [docs](./metrics.md) for more info.\n", |
174 | 174 | "\n", |
175 | 175 | "**Note:** *by default these metrics are using OpenAI's API to compute the score. If you using this metric make sure you set the environment key `OPENAI_API_KEY` with your API key. You can also try other LLMs for evaluation, check the [llm guide](./guides/llms.ipynb) to learn more*\n", |
176 | 176 | "\n", |
|
184 | 184 | "source": [ |
185 | 185 | "## Evaluation\n", |
186 | 186 | "\n", |
187 | | - "Running the evalutation is as simple as calling evaluate on the `Dataset` with the metrics of your choice." |
| 187 | + "Running the evaluation is as simple as calling evaluate on the `Dataset` with the metrics of your choice." |
188 | 188 | ] |
189 | 189 | }, |
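Putting the pieces together, a sketch of what that call might look like. The `fiqa_eval["baseline"]` split name and the metric list mirror the earlier sketches and are assumptions, not the notebook's exact cell.

```python
from ragas import evaluate

# Assumes `fiqa_eval` and the metric objects from the earlier sketches.
result = evaluate(
    fiqa_eval["baseline"],
    metrics=[context_precision, answer_relevancy, faithfulness, context_recall, harmfulness],
)
print(result)  # maps each metric name to its aggregate score
```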
190 | 190 | { |
|