README.md (+25 −8 lines changed)
@@ -5,13 +5,9 @@ The goal of this repo is to evaluate CLIP-like models on a standard set
 of datasets on different tasks such as zero-shot classification and zero-shot
 retrieval, and captioning.
 
-Below we show the average rank (1 is the best, lower is better) of different CLIP models, evaluated
-on different datasets.
-
-
-
-The current detailed results of the benchmark can be seen [here](benchmark/README.md)
-or directly in the [notebook](benchmark/results.ipynb).
+- Results of OpenCLIP models on 38 datasets can be seen here: <https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv>.
+- Results from "Reproducible scaling laws for contrastive language-image learning" [[arXiv]](https://arxiv.org/abs/2212.07143) can be seen here: <https://github.com/LAION-AI/scaling-laws-openclip>.
+- Additional results are available in <https://github.com/LAION-AI/CLIP_benchmark_results>.
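The zero-shot classification task the README describes is, at its core, a cosine-similarity scoring step: embed the images and one text prompt per class, then pick the most similar class. The sketch below is a minimal illustration of that scoring step only, not this repo's implementation; the embeddings are random placeholders standing in for real CLIP encoder outputs.

```python
import numpy as np

def zero_shot_accuracy(image_feats, text_feats, labels):
    """Assign each image the class whose text embedding is most
    cosine-similar, and return top-1 accuracy."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T           # (n_images, n_classes) cosine similarities
    preds = logits.argmax(axis=1)  # best-matching class per image
    return (preds == labels).mean()

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(10, 512))       # placeholder: one prompt embedding per class
labels = rng.integers(0, 10, size=64)
# Placeholder image features, built to correlate with their class prompt
# so the toy example classifies well.
image_feats = text_feats[labels] + 0.1 * rng.normal(size=(64, 512))
acc = zero_shot_accuracy(image_feats, text_feats, labels)
print(acc)
```

In a real benchmark run, `image_feats` and `text_feats` would come from a CLIP image encoder and text encoder (with prompt templates), but the accuracy computation has this shape.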
@@ -411,3 +407,3 @@
 - Thanks to [@teasgen](https://github.com/teasgen) for support of validation set and tuning linear probing similar to OpenAI's CLIP.
 - Thanks to [@visheratin](https://github.com/visheratin) for multilingual retrieval datasets support from <https://arxiv.org/abs/2309.01859>.
 - This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template. Thanks to the author.