Commit ddc5d76

jjmachan and Jithin James authored

fix: some fixes and readme (#26)

* added info about data
* fix some errors
* fix emojis
* fix blue
* char
* quickstart
* download spacy if not found
* finish quickstart
* fix linting issues
* update badges

Co-authored-by: Jithin James <[email protected]>

1 parent fb17f9d commit ddc5d76

File tree

5 files changed (+223, -675 lines)

README.md

Lines changed: 15 additions & 12 deletions
@@ -7,36 +7,37 @@
 </p>

 <p align="center">
-    <a href="https://github.com/beir-cellar/beir/releases">
-        <img alt="GitHub release" src="https://img.shields.io/github/release/beir-cellar/beir.svg">
+    <a href="https://github.com/explodinggradients/ragas/releases">
+        <img alt="GitHub release" src="https://img.shields.io/github/release/explodinggradients/ragas.svg">
     </a>
     <a href="https://www.python.org/">
         <img alt="Build" src="https://img.shields.io/badge/Made%20with-Python-1f425f.svg?color=purple">
     </a>
-    <a href="https://github.com/beir-cellar/beir/blob/master/LICENSE">
-        <img alt="License" src="https://img.shields.io/github/license/beir-cellar/beir.svg?color=green">
+    <a href="https://github.com/explodinggradients/ragas/blob/master/LICENSE">
+        <img alt="License" src="https://img.shields.io/github/license/explodinggradients/ragas.svg?color=green">
     </a>
     <a href="https://colab.research.google.com/drive/1HfutiEhHMJLXiWGT8pcipxT5L2TpYEdt?usp=sharing">
         <img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg">
     </a>
-    <a href="https://github.com/beir-cellar/beir/">
+    <a href="https://github.com/explodinggradients/ragas/">
         <img alt="Downloads" src="https://badges.frapsoft.com/os/v1/open-source.svg?v=103">
     </a>
 </p>

 <h4 align="center">
     <p>
-        <a href="#beers-installation">Installation</a> |
-        <a href="#beers-quick-example">Quick Example</a> |
-        <a href="https://huggingface.co/BeIR">Hugging Face</a>
+        <a href="#Installation">Installation</a> |
+        <a href="#quickstart">Quick Example</a> |
+        <a href="#metrics">Metrics List</a> |
+        <a href="https://huggingface.co/explodinggradients">Hugging Face</a>
     <p>
 </h4>

 ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. RAG denotes a class of LLM applications that use external data to augment the LLM's context. There are existing tools and frameworks that help you build these pipelines, but evaluating them and quantifying your pipeline's performance can be hard. This is where ragas (RAG Assessment) comes in.

 ragas provides you with tools, based on the latest research, for evaluating LLM-generated text to give you insights about your RAG pipeline. ragas can be integrated with your CI/CD to provide continuous checks to ensure performance.

-## Installation 🛡
+## 🛡 Installation

 ```bash
 pip install ragas
@@ -47,7 +48,7 @@ git clone https://github.com/explodinggradients/ragas && cd ragas
 pip install -e .
 ```

-## Quickstart 🔥
+## 🔥 Quickstart

 This is a small example program you can run to see ragas in action!
 ```python
@@ -74,11 +75,13 @@ e = Evaluation(
 results = e.eval(ds["ground_truth"], ds["generated_text"])
 print(results)
 ```
-If you want a more in-depth explanation of core components, check out our quick-start notebook
+If you want a more in-depth explanation of core components, check out our [quick-start notebook](./examples/quickstart.ipynb).
 ## 🧰 Metrics

 ### ✏️ Character based

+Character-based metrics focus on analyzing text at the character level.
+
 - **Levenshtein distance** is the number of single-character edits (insertions, deletions, substitutions) required to change your generated text into the ground-truth text.
 - **Levenshtein ratio** is obtained by dividing the Levenshtein distance by the total number of characters in the generated text and the ground truth. This type of metric is suitable when working with short, precise texts.

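The two character-based metrics added above can be sketched in plain Python. This is a minimal dynamic-programming illustration, not the implementation ragas ships; the ratio follows the definition given in the README (distance divided by the summed lengths of both texts).

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions, deletions,
    substitutions) required to turn string a into string b."""
    # prev[j] holds the edit distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]


def levenshtein_ratio(a: str, b: str) -> float:
    """Distance normalized by the summed lengths, per the README's definition."""
    total = len(a) + len(b)
    return levenshtein_distance(a, b) / total if total else 0.0


print(levenshtein_distance("kitten", "sitting"))  # 3
```

In practice an optimized library (e.g. rapidfuzz) would replace the pure-Python loop, but the normalization step is the part specific to the ratio metric.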
@@ -92,7 +95,7 @@ N-gram based metrics as name indicates uses n-grams for comparing generated answ

 - **BLEU** (BiLingual Evaluation Understudy)

-    It measures precision by comparing  clipped n-grams in generated text to ground truth text. These matches do not consider the ordering of words.
+    It measures precision by comparing clipped n-grams in the generated text to the ground-truth text. These matches do not consider the ordering of words.

 ### 🪄 Model Based
0 commit comments