COMPETITION_RULES.md (0 additions, 3 deletions)

@@ -43,7 +43,6 @@ The Competition begins at 12:01am (ET) on November 28, 2023 and ends at 11:59pm

- **Intention to Submit.** You must register your Intention to Submit no later than 11:59pm ET on February 28, 2024.
- **Submission Period.** You must complete your Submission and enter it after the Intention to Submit deadline, but no later than 11:59pm ET on April 04, 2024.
-- **Deadline for self-reporting results.** 11:59pm ET on May 28, 2024.

## Agreement to Official Rules
@@ -65,8 +64,6 @@ There are four (4) steps to a successful submission ("Submission").

The form is sent to the working group chairs, who will process your Submission. Failure to complete the proper Submission Forms will result in disqualification of your Submission. At the close of the Submission Period, your GitHub repository must be public.

-4. **Report Results.** Prior to the Deadline for self-reporting results, run your Submission on either the qualification set or the full benchmark set and report the results. You must report your scores by uploading all unmodified logs that the benchmarking codebase automatically generates in a separate `/results` directory within the `/submission` folder of your Submission's GitHub repository.
-

## Submission Conditions

All Submissions must meet the requirements of the terms contained in these rules, including reliance on new algorithmic or mathematical ideas and concepts, and must not use software engineering approaches in order to increase primitive operations in PyTorch, JAX, their dependencies, the operating systems, or the hardware. By entering, all Team members warrant that their Submission does not infringe any third party's rights, and that Team members have obtained all necessary permissions from all relevant third parties to submit the Submission. If, in the sole discretion of Sponsor, any Submission constitutes copyright or other intellectual property infringement, the Submission will be disqualified. Team must hold all rights through license or ownership to the entire Submission. Team members agree to indemnify Sponsor against any and all claims of infringement from any third party for any use by Sponsor of a Submission. Team members may not be: 1) represented under contract that would limit or impair Sponsor's ability to use the Submission; or 2) under any other contractual relationship, including but not limited to guild and/or union memberships, that may prohibit them from participating fully in this Competition, or from allowing Sponsor to use, royalty-free, the Submission worldwide in all media in perpetuity.
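
For context on the **Report Results** step removed above: it required the unmodified logs produced by the benchmarking codebase to be placed in a `/results` directory inside the `/submission` folder of the submission's GitHub repository. A minimal sketch of collecting logs into that layout, with the source path and log file extension assumed purely for illustration:

```python
# Hypothetical sketch of the layout the removed "Report Results" step asked for:
# copy the unmodified logs generated by the benchmarking codebase into
# submission/results/ inside the submission's GitHub repository.
import shutil
from pathlib import Path

generated_logs = Path("experiments/my_run")      # assumed location of the generated logs
results_dir = Path("submission/results")
results_dir.mkdir(parents=True, exist_ok=True)

for log_file in generated_logs.rglob("*.json"):  # log file extension assumed for illustration
    shutil.copy2(log_file, results_dir / log_file.name)
```
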
DOCUMENTATION.md (10 additions, 0 deletions)

@@ -400,6 +400,8 @@ Submissions will be scored based on their performance on the [fixed workload](#f

Furthermore, a less computationally expensive subset of the fixed workloads is collected with the [qualification set](#qualification-set). Submitters without enough compute resources to self-report on the full set of fixed and held-out workloads can instead self-report on this smaller qualification set. Well-performing submissions can thereby qualify for computational resources provided by sponsors of the benchmark to be scored on the full benchmark set.

+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+

#### Fixed workloads
The fixed workloads are fully specified with the call for submissions. They contain a diverse set of tasks such as image classification, machine translation, speech recognition, or other typical machine learning tasks. For a single task there might be multiple models and therefore multiple fixed workloads. The entire set of fixed workloads should have a combined runtime of roughly 100 hours on the [benchmarking hardware](#benchmarking-hardware).
@@ -429,6 +431,8 @@ Our scoring procedure uses the held-out workloads only to penalize submissions t
#### Qualification set

+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+

The qualification set is designed for submitters who may not have the compute resources to self-report on the full set of [fixed](#fixed-workloads) and [held-out workloads](#randomized-workloads). They may instead self-report numbers on this smaller qualification set. The best-performing submissions may then qualify for compute sponsorship offering a free evaluation on the full benchmark set and therefore the possibility to win [awards and prizes](/COMPETITION_RULES.md#prizes).

The qualification set consists of the same [fixed workloads](#fixed-workloads) as mentioned above, except for both workloads on *ImageNet*, both workloads on *LibriSpeech*, and the *fastMRI* workload. The remaining three workloads (*WMT*, *Criteo 1TB*, and *OGBG*) form the qualification set. There are no [randomized workloads](#randomized-workloads) in the qualification set. The qualification set of workloads aims to have a combined runtime of roughly 24 hours on the [benchmarking hardware](#benchmarking-hardware).
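
To make that composition concrete, a small sketch of the selection is shown below; the workload names are illustrative labels rather than the exact identifiers used by the benchmarking codebase:

```python
# Illustrative only: the qualification set is the fixed workloads minus both
# ImageNet workloads, both LibriSpeech workloads, and the fastMRI workload.
fixed_workloads = {
    "imagenet_resnet", "imagenet_vit",                   # both ImageNet workloads
    "librispeech_conformer", "librispeech_deepspeech",   # both LibriSpeech workloads
    "fastmri",
    "wmt", "criteo1tb", "ogbg",
}
excluded_prefixes = ("imagenet", "librispeech", "fastmri")
qualification_set = sorted(
    w for w in fixed_workloads if not w.startswith(excluded_prefixes)
)
print(qualification_set)  # ['criteo1tb', 'ogbg', 'wmt']
```
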
@@ -449,6 +453,8 @@ All scored runs have to be performed on the benchmarking hardware to allow for a

- 240 GB in RAM
- 2 TB in storage (for datasets).

+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+

For self-reported results, it is acceptable to perform the tuning trials on hardware different from the benchmarking hardware, as long as the same hardware is used for all tuning trials. Once the best trial, i.e. the one that reached the *validation* target the fastest, is determined, this run has to be repeated on the competition hardware. For example, submitters can tune using their locally available hardware but have to use the benchmarking hardware, e.g. via cloud providers, for the $5$ scored runs. This allows for a fair comparison to the reported results of other submitters while allowing some flexibility in the hardware.
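
As a sketch of that selection rule, with made-up trial records used purely for illustration (the benchmarking codebase has its own log format):

```python
# Made-up tuning-trial records: pick the trial that reached the validation
# target the fastest; only that configuration is then repeated on the
# benchmarking hardware (once per study, i.e. the 5 scored runs).
trials = [
    {"trial": 0, "reached_target": True, "time_to_target_sec": 41_200},
    {"trial": 1, "reached_target": False, "time_to_target_sec": None},
    {"trial": 2, "reached_target": True, "time_to_target_sec": 38_750},
]

successful = [t for t in trials if t["reached_target"]]
best = min(successful, key=lambda t: t["time_to_target_sec"])
print(f"Repeat trial {best['trial']} on the benchmarking hardware.")
```
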
#### Defining target performance
@@ -571,10 +577,14 @@ on the benchmarking hardware. We also recommend to do a dry run using a cloud in
#### Are we allowed to use our own hardware to self-report the results?

+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+

You only have to use the benchmarking hardware for runs that are directly involved in the scoring procedure. This includes all runs for the self-tuning ruleset, but only the runs of the best hyperparameter configuration in each study for the external tuning ruleset. For example, you could use your own (different) hardware to tune your submission and identify the best hyperparameter configuration (in each study) and then only run this configuration (i.e. 5 runs, one for each study) on the benchmarking hardware.
#### What can I do if running the benchmark is too expensive for me?

+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+

Submitters unable to self-fund scoring costs can instead self-report only on the [qualification set of workloads](/COMPETITION_RULES.md#qualification-set) that excludes some of the most expensive workloads. Based on this performance on the qualification set, the working group will provide - as funding allows - compute to evaluate and score the most promising submissions. Additionally, we encourage researchers to reach out to the [working group](mailto:[email protected]) to find potential collaborators with the resources to run larger, more comprehensive experiments for both developing and scoring submissions.
#### Can I submit previously published training algorithms as submissions?

README.md (3 additions, 3 deletions)

@@ -27,9 +27,9 @@

---

> [!IMPORTANT]
-> Upcoming Deadline:
-> Submission deadline: **April 04th, 2024** (*moved by a week*). \
-> For submission instructions please see [Packaging your Submission Code](/GETTING_STARTED.md#package-your-submission-code) section in the Getting Started document.\
+> Submitters are no longer required to self-report results.
+> We are currently in the process of evaluating and scoring received submissions.
+> We are aiming to release results by July 15th 2024.
> For other key dates please see [Call for Submissions](CALL_FOR_SUBMISSIONS.md).