diff --git a/examples/README.md b/examples/README.md
index d115b8fc..3faea48d 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -158,6 +158,8 @@ We support evaluations on [MTEB](https://github.com/embeddings-benchmark/mteb),
 
 ```shell
 pip install pytrec_eval
+# if you fail to install pytrec_eval, try the following command
+# pip install pytrec-eval-terrier
 pip install https://github.com/kyamagu/faiss-wheels/releases/download/v1.7.3/faiss_gpu-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 python -m FlagEmbedding.evaluation.msmarco \
 --eval_name msmarco \
diff --git a/examples/evaluation/README.md b/examples/evaluation/README.md
index 931d2de9..f429ed58 100644
--- a/examples/evaluation/README.md
+++ b/examples/evaluation/README.md
@@ -105,6 +105,8 @@ You need install `pytrec_eval` and `faiss` for evaluation:
 
 ```shell
 pip install pytrec_eval
+# if you fail to install pytrec_eval, try the following command
+# pip install pytrec-eval-terrier
 pip install https://github.com/kyamagu/faiss-wheels/releases/download/v1.7.3/faiss_gpu-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
 ```
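
The fallback package is, to my understanding, a maintained build of the same library that still installs under the `pytrec_eval` module name, so downstream evaluation code should not need changes. A minimal sanity check along these lines (assuming both `pytrec_eval` and the faiss wheel above are installed) can confirm the dependencies resolve before running the evaluation:

```shell
# check that the evaluation dependencies import, whichever pytrec_eval build was installed
python -c "import pytrec_eval, faiss; print(pytrec_eval.__file__, faiss.__version__)"
```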