Replies: 2 comments
-
Not at the moment, @NehaB18. We are, however, working on speeding the benchmark up and have already made a few drastic improvements (#572, #481). For retrieval there is an ongoing discussion over at #638; implementing a downsampling function for retrieval might be a reasonable way to speed it up. If you simply want to run a selected subset of the retrieval tasks, you can do something like:

```python
import random

import mteb

tasks = mteb.get_tasks(languages=["eng"], domains=["Legal"], task_types=["Retrieval"])
task_list = list(tasks)
random.shuffle(task_list)
tasks_to_run = task_list[:10]  # select 10 tasks at random
```
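If you want a fraction of the tasks (e.g. the 5% mentioned in the question) rather than a fixed count, a small stdlib helper works. This is a generic sketch, not part of the mteb API; `sample_fraction` is a hypothetical helper name, and the `seed` argument is only there to make the selection reproducible:

```python
import math
import random

def sample_fraction(items, fraction, seed=None):
    """Return a random subset covering roughly `fraction` of `items` (at least one)."""
    rng = random.Random(seed)
    k = max(1, math.ceil(len(items) * fraction))
    return rng.sample(items, k)

# e.g. 5% of a hypothetical list of 40 retrieval task names
tasks = [f"task_{i}" for i in range(40)]
subset = sample_fraction(tasks, 0.05, seed=0)  # 2 of 40 tasks
```

The result of `sample_fraction` can then be passed wherever a task list is expected, in place of the shuffled slice above.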
-
@NehaB18 I will move this over to the discussions.
-
Is there any way to run the evaluation on a sample of the datasets, for example 5% of all Retrieval tasks?