|
16 | 16 | <a href="https://github.com/explodinggradients/ragas/blob/master/LICENSE"> |
17 | 17 | <img alt="License" src="https://img.shields.io/github/license/explodinggradients/ragas.svg?color=green"> |
18 | 18 | </a> |
19 | | - <a href="https://colab.research.google.com/drive/1HfutiEhHMJLXiWGT8pcipxT5L2TpYEdt?usp=sharing"> |
| 19 | + <a href="https://colab.research.google.com/github/explodinggradients/ragas/blob/main/examples/quickstart.ipynb"> |
20 | 20 | <img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"> |
21 | 21 | </a> |
| 22 | + <a href="https://discord.gg/5djav8GGNZ"> |
| 23 | + <img alt="discord-invite" src="https://dcbadge.vercel.app/api/server/5djav8GGNZ?style=flat"> |
| 24 | + </a> |
22 | 25 | <a href="https://github.com/explodinggradients/ragas/"> |
23 | 26 | <img alt="Downloads" src="https://badges.frapsoft.com/os/v1/open-source.svg?v=103"> |
24 | 27 | </a> |
|
29 | 32 | <a href="#shield-installation">Installation</a> | |
30 | 33 | <a href="#fire-quickstart">Quickstart</a> | |
31 | 34 | <a href="#luggage-metrics">Metrics</a> | |
| 35 | + <a href="#-community">Community</a> | |
32 | 36 | <a href="#raising_hand_man-faq">FAQ</a> | |
33 | 37 | <a href="https://huggingface.co/explodinggradients">Hugging Face</a> |
34 | 38 | <p> |
@@ -91,12 +95,15 @@ Here we assume that you already have your RAG pipeline ready. When it comes to R |
91 | 95 | 2. Collect a set of sample prompts (min 20) to form your test set. |
92 | 96 | 3. Run your pipeline using the test set before and after the change. Each time record the prompts with context and generated output. |
93 | 97 | 4. Run ragas evaluation for each of them to generate evaluation scores. |
94 | | -5. Compare the scores and you will know how much the change has affected your pipelines's performance. |
| 98 | +5. Compare the scores to see how much the change has affected your pipeline's performance. |
95 | 99 |
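For illustration, step 4 might look like the minimal sketch below. It assumes the `evaluate` entry point and the `question`/`contexts`/`answer` column layout from the ragas quickstart; neither appears in this diff, so treat the names as assumptions rather than the definitive API.

```python
# Minimal sketch: score one recorded test-set sample with ragas (assumed quickstart API).
from datasets import Dataset  # Hugging Face datasets
from ragas import evaluate

# Prompts, retrieved contexts, and generated answers recorded while running
# the pipeline before (or after) the change.
samples = {
    "question": ["When was the first Super Bowl played?"],
    "contexts": [["The first AFL-NFL World Championship Game was played on January 15, 1967."]],
    "answer": ["The first Super Bowl was played on January 15, 1967."],
}

# evaluate() returns per-metric scores; run it on the "before" and "after"
# test sets and compare the numbers to see the effect of the change.
results = evaluate(Dataset.from_dict(samples))
print(results)
```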
|
| 100 | +## 🫂 Community |
| 101 | +If you want to get more involved with Ragas, check out our [discord server](https://discord.gg/5djav8GGNZ). It's a fun community where we geek out about LLMs, retrieval, production issues, and more. |
96 | 102 |
|
97 | 103 | ## :raising_hand_man: FAQ |
98 | 104 | 1. Why harmonic mean? |
99 | | -Harmonic mean penalizes extreme values. For example if your generated answer is fully factually consistent with the context (factuality = 1) but is not relevant to the question (relevancy = 0), simple average would give you a score of 0.5 but harmonic mean will give you 0.0 |
| 105 | + |
| 106 | +Harmonic mean penalizes extreme values. For example, if your generated answer is fully factually consistent with the context (factuality = 1) but is not relevant to the question (relevancy = 0), a simple average would give you a score of 0.5, while the harmonic mean gives you 0.0. |
100 | 107 |
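A quick way to check the FAQ example is to compute both means for the two scores; the small helper below returns 0 when any score is 0, which is the limiting value of the harmonic mean and matches the answer above.

```python
# Compare a simple average with the harmonic mean for the FAQ example.
def harmonic_mean(scores):
    # The harmonic mean tends to 0 as any score approaches 0,
    # so a single zero collapses the overall score.
    if any(s == 0 for s in scores):
        return 0.0
    return len(scores) / sum(1.0 / s for s in scores)

factuality, relevancy = 1.0, 0.0
print((factuality + relevancy) / 2)            # 0.5 -> simple average hides the failure
print(harmonic_mean([factuality, relevancy]))  # 0.0 -> harmonic mean penalizes it
```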
|
101 | 108 |
|
102 | 109 |
|
|