Commit 6548f3c

Update Readme (#31)
* fix sim calculation
* added new emojis
* rename to references
* remove citations
1 parent cf3ad78 commit 6548f3c

File tree: 2 files changed (+11 / -11 lines)


README.md

Lines changed: 11 additions & 11 deletions
@@ -26,9 +26,9 @@
 
 <h4 align="center">
 <p>
-<a href="#beers-installation">Installation</a> |
-<a href="#beers-quickstart">Quickstart</a> |
-<a href="#beers-metrics">Metrics</a> |
+<a href="#shield-installation">Installation</a> |
+<a href="#fire-quickstart">Quickstart</a> |
+<a href="#luggage-metrics">Metrics</a> |
 <a href="https://huggingface.co/explodinggradients">Hugging Face</a>
 <p>
 </h4>
@@ -37,7 +37,7 @@ ragas is a framework that helps you evaluate your Retrieval Augmented Generation
 
 ragas provides you with tools based on the latest research for evaluating LLM-generated text, to give you insights about your RAG pipeline. ragas can be integrated with your CI/CD to provide continuous checks that ensure performance.
 
-## :beers: Installation
+## :shield: Installation
 
 ```bash
 pip install ragas
@@ -48,7 +48,7 @@ git clone https://github.com/explodinggradients/ragas && cd ragas
 pip install -e .
 ```
 
-## :beers: Quickstart
+## :fire: Quickstart
 
 This is a small example program you can run to see ragas in action!
 ```python
@@ -76,14 +76,14 @@ results = e.eval(ds["ground_truth"], ds["generated_text"])
 print(results)
 ```
 If you want a more in-depth explanation of the core components, check out our quick-start notebook.
-## :beers: Metrics
+## :luggage: Metrics
 
-### ✏️ Character based
+### :3rd_place_medal: Character based
 
 - **Levenshtein distance** is the number of single-character edits (insertions, deletions, substitutions) required to change your generated text into the ground-truth text.
 - **Levenshtein ratio** is obtained by dividing the Levenshtein distance by the sum of the number of characters in the generated text and the ground truth. This type of metric is suitable where one works with short and precise texts.
 
-### 🖊 N-Gram based
+### :2nd_place_medal: N-Gram based
 
 N-gram-based metrics, as the name indicates, use n-grams to compare the generated answer with the ground truth. They are suitable for extractive and abstractive tasks but have limitations on long free-form answers due to word-based comparison.

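A note on this hunk: the character-based metrics it describes are easy to reproduce. Below is a minimal illustrative sketch in plain Python, assuming the standard dynamic-programming formulation of Levenshtein distance and the length-sum reading of "ratio" given above; it is not ragas's own implementation.

```python
# Illustrative sketch only: a standard dynamic-programming Levenshtein,
# not ragas's implementation.
def levenshtein_distance(a: str, b: str) -> int:
    """Single-character edits (insertions, deletions, substitutions)
    needed to turn `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion from `a`
                curr[j - 1] + 1,           # insertion into `a`
                prev[j - 1] + (ca != cb),  # substitution (free on a match)
            ))
        prev = curr
    return prev[-1]

def levenshtein_ratio(generated: str, ground_truth: str) -> float:
    """Distance normalised by the combined character count, per the
    description in the diff above."""
    total = len(generated) + len(ground_truth)
    return levenshtein_distance(generated, ground_truth) / total if total else 0.0

print(levenshtein_distance("kitten", "sitting"))         # 3
print(round(levenshtein_ratio("kitten", "sitting"), 3))  # 0.231
```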
@@ -95,7 +95,7 @@ N-gram-based metrics, as the name indicates, use n-grams to compare the generated
 
 It measures precision by comparing clipped n-grams in the generated text to the ground-truth text. These matches do not consider the ordering of words.
 
-### 🪄 Model Based
+### :1st_place_medal: Model Based
 
 Model-based methods use language models combined with NLP techniques to compare the generated text with the ground truth. They are well suited to free-form long or short answer types.

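For the "clipped n-grams" this hunk refers to (the mechanism BLEU-style precision builds on), here is a minimal sketch, again illustrative rather than ragas's implementation: each generated n-gram is credited at most as many times as it occurs in the ground truth, and word order is ignored.

```python
# Illustrative sketch of clipped n-gram precision; not ragas's implementation.
from collections import Counter

def ngrams(tokens: list, n: int) -> list:
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_ngram_precision(generated: str, ground_truth: str, n: int = 1) -> float:
    gen = ngrams(generated.split(), n)
    if not gen:
        return 0.0
    ref_counts = Counter(ngrams(ground_truth.split(), n))
    gen_counts = Counter(gen)
    # Clip each generated n-gram's count at its count in the ground truth.
    clipped = sum(min(c, ref_counts[g]) for g, c in gen_counts.items())
    return clipped / len(gen)

# "the" appears 3 times but is clipped to 1, so precision is (1 + 1) / 4.
print(clipped_ngram_precision("the the the cat", "the cat sat"))  # 0.5
```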
@@ -111,7 +111,7 @@ Model-based methods use language models combined with NLP techniques to compare
 
 - **$Q^2$**
 
-Best used to measure factual consistency between the ground truth and the generated text. Scores range from 0 to 1; a higher score indicates better factual consistency between the ground truth and the generated answer. It employs the QA-QG paradigm followed by NLI to compare the ground truth and the generated answer. The $Q^2$ score is highly correlated with human judgement.
+Best used to measure factual consistency between the ground truth and the generated text. Scores range from 0 to 1; a higher score indicates better factual consistency between the ground truth and the generated answer. It employs the QA-QG paradigm followed by NLI to compare the ground truth and the generated answer. The $Q^2$ score is highly correlated with human judgement. :warning: This is a time- and resource-hungry metric.
 
-📜 Check out [citations](./citations.md) for related publications.
+📜 Check out [citations](./references.md) for related publications.
 

citations.md → references.md: file renamed without changes.
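On the :warning: added in the last hunk: a $Q^2$-style score runs question generation, QA, and NLI models for every example, which is what makes it time- and resource-hungry. The sketch below shows only the shape of that QA-QG + NLI loop; the three callables are hypothetical stand-ins, not the actual $Q^2$ or ragas code.

```python
# Schematic sketch of a Q^2-style QA-QG + NLI loop. The three callables
# are hypothetical stand-ins (in practice, large QG, QA and NLI models),
# which is why the metric is time- and resource-hungry.
from typing import Callable, List, Tuple

def q2_style_score(
    generated: str,
    ground_truth: str,
    gen_qa_pairs: Callable[[str], List[Tuple[str, str]]],  # QG + QA over the generated text
    answer: Callable[[str, str], str],                      # QA against the ground truth
    entails: Callable[[str, str], float],                   # NLI: P(premise entails hypothesis)
) -> float:
    """Mean NLI agreement between answers grounded in the generated text
    and answers to the same questions grounded in the ground truth."""
    pairs = gen_qa_pairs(generated)
    if not pairs:
        return 0.0
    scores = []
    for question, gen_answer in pairs:
        ref_answer = answer(question, ground_truth)
        scores.append(entails(ref_answer, gen_answer))  # 1.0 = fully consistent
    return sum(scores) / len(scores)  # in [0, 1]; higher = more factually consistent
```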
