Commit f7cee83

Merge pull request #1535 from hanhainebula/master

update examples README: support BRIGHT evaluation

2 parents 971236c + 5a5dd62

examples/evaluation/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Evaluation
 
-After fine-tuning the model, it is essential to evaluate its performance. To facilitate this process, we have provided scripts for assessing the model on various datasets. These datasets include: [**MTEB**](https://github.com/embeddings-benchmark/mteb), [**BEIR**](https://github.com/beir-cellar/beir), [**MSMARCO**](https://microsoft.github.io/msmarco/), [**MIRACL**](https://github.com/project-miracl/miracl), [**MLDR**](https://huggingface.co/datasets/Shitao/MLDR), [**MKQA**](https://github.com/apple/ml-mkqa), [**AIR-Bench**](https://github.com/AIR-Bench/AIR-Bench), and your **custom datasets**.
+After fine-tuning the model, it is essential to evaluate its performance. To facilitate this process, we have provided scripts for assessing the model on various datasets. These datasets include: [**MTEB**](https://github.com/embeddings-benchmark/mteb), [**BEIR**](https://github.com/beir-cellar/beir), [**MSMARCO**](https://microsoft.github.io/msmarco/), [**MIRACL**](https://github.com/project-miracl/miracl), [**MLDR**](https://huggingface.co/datasets/Shitao/MLDR), [**MKQA**](https://github.com/apple/ml-mkqa), [**AIR-Bench**](https://github.com/AIR-Bench/AIR-Bench), [**BRIGHT**](https://brightbenchmark.github.io/), and your **custom datasets**.
 
 To evaluate the model on a specific dataset, you can find the corresponding bash scripts in the respective folders dedicated to each dataset. These scripts contain the necessary commands and configurations to run the evaluation process.
 

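For context, a minimal sketch of the per-dataset workflow the README describes: change into the folder for the benchmark you want and run its bash script. The folder path and script name below (`examples/evaluation/bright/`, `eval_bright.sh`) are hypothetical placeholders for illustration, not the repository's actual names; check the dataset's folder for the real entry point.

```bash
# Sketch only: paths and script name are assumed, not from the repository.
cd examples/evaluation/bright   # assumed folder for the newly added BRIGHT benchmark
bash eval_bright.sh             # hypothetical script wrapping the evaluation commands
```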