Commit ccd9fbb

Merge pull request #760 from mlcommons/announcement_update
Announcement update
2 parents 94b8e54 + 9e3d41a commit ccd9fbb

4 files changed: +13 -7 lines changed

4 files changed

+13
-7
lines changed

CALL_FOR_SUBMISSIONS.md

Lines changed: 0 additions & 1 deletion
```diff
@@ -17,7 +17,6 @@ Submissions can compete under two hyperparameter tuning rulesets (with separate
 - **Registration deadline to express non-binding intent to submit: February 28th, 2024**.\
   Please fill out the (mandatory but non-binding) [**registration form**](https://forms.gle/K7ty8MaYdi2AxJ4N8).
 - **Submission deadline: April 04th, 2024** *(moved by a week from the initial March 28th, 2024)*
-- **Deadline for self-reporting preliminary results: May 28th, 2024**
 - [tentative] Announcement of all results: July 15th, 2024
 
 For a detailed and up-to-date timeline see the [Competition Rules](/COMPETITION_RULES.md).
```

COMPETITION_RULES.md

Lines changed: 0 additions & 3 deletions
```diff
@@ -43,7 +43,6 @@ The Competition begins at 12:01am (ET) on November 28, 2023 and ends at 11:59pm
 
 - **Intention to Submit.** You must register your Intention to Submit no later than 11:59pm ET on February 28, 2024.
 - **Submission Period.** You must complete your Submission and enter it after the Intention to Submit deadline, but no later than 11:59pm ET on April 04, 2024.
-- **Deadline for self-reporting results.** 11:59pm ET on May 28, 2024.
 
 ## Agreement to Official Rules
 
```
```diff
@@ -65,8 +64,6 @@ There are four (4) steps to a successful submission ("Submission").
 
 The form is sent to the working group chairs, who will process your Submission. Failure to complete the proper Submission Forms will result in disqualification of your Submission. At the close of the Submission Period, your GitHub repository must be public.
 
-4. **Report Results.** Prior to the Deadline for self-reporting results, run your Submission on either the qualification set or the full benchmark set and report the results. You must report your scores by uploading all unmodified logs that the benchmarking codebase automatically generates in a separate `/results` directory within the `/submission` folder of your Submission's GitHub repository.
-
 ## Submission Conditions
 
 All Submissions must meet the requirements of the terms contained in these rules, including reliance on new algorithmic or mathematical ideas and concepts, and must not use software engineering approaches in order to increase primitive operations in PyTorch, JAX, their dependencies, the operating systems, or the hardware. By entering, all Team members warrant that their Submission does not infringe any third party's rights, and that Team members have obtained all necessary permissions from all relevant third parties to submit the Submission. If, in the sole discretion of Sponsor, any Submission constitutes copyright or other intellectual property infringement, the Submission will be disqualified. Team must hold all rights through license or ownership to the entire Submission. Team members agree to indemnify Sponsor against any and all claims of infringement from any third party for any use by Sponsor of a Submission. Team members may not be: 1) represented under contract that would limit or impair Sponsor's ability to use the Submission; or 2) under any other contractual relationship, including but not limited to guild and/or union memberships, that may prohibit them from participating fully in this Competition, or from allowing Sponsor to use, royalty-free, the Submission worldwide in all media in perpetuity.
```
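As a reading aid for the removed "Report Results" step, here is a minimal, hypothetical sketch of checking the repository layout it described; only the `/submission` and `/results` directory names come from the rule text, while the function name and usage are invented:

```python
# Hypothetical sketch only: check that a submission repository contains the
# `/results` directory inside the `/submission` folder, as the removed
# "Report Results" step described. Directory names come from the rule text;
# the function name and usage are invented for illustration.
from pathlib import Path


def has_unmodified_logs(repo_root: str) -> bool:
    """Return True if <repo_root>/submission/results exists and holds files."""
    results_dir = Path(repo_root) / "submission" / "results"
    return results_dir.is_dir() and any(results_dir.iterdir())


if __name__ == "__main__":
    print(has_unmodified_logs("."))  # run from the root of the submission repo
```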

DOCUMENTATION.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -400,6 +400,8 @@ Submissions will be scored based on their performance on the [fixed workload](#f
 
 Furthermore, a less computationally expensive subset of the fixed workloads is collected with the [qualification set](#qualification-set). Submitters without enough compute resources to self-report on the full set of fixed and held-out workloads can instead self-report on this smaller qualification set. Well-performing submissions can thereby qualify for computational resources provided by sponsors of the benchmark to be scored on the full benchmark set.
 
+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+
 #### Fixed workloads
 
 The fixed workloads are fully specified with the call for submissions. They contain a diverse set of tasks such as image classification, machine translation, speech recognition, or other typical machine learning tasks. For a single task there might be multiple models and therefore multiple fixed workloads. The entire set of fixed workloads should have a combined runtime of roughly 100 hours on the [benchmarking hardware](#benchmarking-hardware).
```
```diff
@@ -429,6 +431,8 @@ Our scoring procedure uses the held-out workloads only to penalize submissions t
 
 #### Qualification set
 
+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+
 The qualification set is designed for submitters that may not have the compute resources to self-report on the full set of [fixed](#fixed-workloads) and [held-out workloads](#randomized-workloads). They may instead self-report numbers on this smaller qualification set. The best-performing submissions may then qualify for compute sponsorship offering a free evaluation on the full benchmark set and therefore the possibility to win [awards and prizes](/COMPETITION_RULES.md#prizes).
 
 The qualification set consists of the same [fixed workloads](#fixed-workloads) as mentioned above, except for both workloads on *ImageNet*, both workloads on *LibriSpeech*, and the *fastMRI* workload. The remaining three workloads (*WMT*, *Criteo 1TB*, and *OGBG*) form the qualification set. There are no [randomized workloads](#randomized-workloads) in the qualification set. The qualification set of workloads aims to have a combined runtime of roughly 24 hours on the [benchmarking hardware](#benchmarking-hardware).
```
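To make the set arithmetic in that paragraph concrete, a minimal sketch; the workload labels are informal shorthands, not identifiers from the benchmark codebase:

```python
# Minimal sketch of the qualification-set selection described above.
# Workload labels are informal shorthands, not codebase identifiers.
fixed_workloads = {
    "imagenet_resnet", "imagenet_vit",                  # both ImageNet workloads
    "librispeech_conformer", "librispeech_deepspeech",  # both LibriSpeech workloads
    "fastmri",
    "wmt", "criteo_1tb", "ogbg",
}
excluded = {
    "imagenet_resnet", "imagenet_vit",
    "librispeech_conformer", "librispeech_deepspeech",
    "fastmri",
}
qualification_set = fixed_workloads - excluded
print(sorted(qualification_set))  # ['criteo_1tb', 'ogbg', 'wmt']
```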
```diff
@@ -449,6 +453,8 @@ All scored runs have to be performed on the benchmarking hardware to allow for a
 - 240 GB in RAM
 - 2 TB in storage (for datasets).
 
+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+
 For self-reported results, it is acceptable to perform the tuning trials on hardware different from the benchmarking hardware, as long as the same hardware is used for all tuning trials. Once the best trial, i.e. the one that reached the *validation* target the fastest, has been determined, this run has to be repeated on the competition hardware. For example, submitters can tune using their locally available hardware but have to use the benchmarking hardware, e.g. via cloud providers, for the $5$ scored runs. This allows for a fair comparison to the reported results of other submitters while allowing some flexibility in the hardware.
 
 #### Defining target performance
```
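The re-run rule in that hunk boils down to one selection step: among all tuning trials that reached the validation target, take the fastest one and repeat only that run on the benchmarking hardware. A hedged sketch, with trial records and field names invented for illustration:

```python
# Hedged sketch of the re-run rule above: tune on any hardware, then repeat
# only the best trial (fastest to the validation target) on the benchmarking
# hardware. Trial records and field names are invented for illustration.
tuning_trials = [
    {"trial_id": 0, "reached_target": True,  "time_to_target_sec": 8400.0},
    {"trial_id": 1, "reached_target": False, "time_to_target_sec": None},
    {"trial_id": 2, "reached_target": True,  "time_to_target_sec": 7200.0},
]
successful = [t for t in tuning_trials if t["reached_target"]]
best_trial = min(successful, key=lambda t: t["time_to_target_sec"])
print(f"Repeat trial {best_trial['trial_id']} on the benchmarking hardware")
```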
```diff
@@ -571,10 +577,14 @@ on the benchmarking hardware. We also recommend to do a dry run using a cloud in
 
 #### Are we allowed to use our own hardware to self-report the results?
 
+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+
 You only have to use the benchmarking hardware for runs that are directly involved in the scoring procedure. This includes all runs for the self-tuning ruleset, but only the runs of the best hyperparameter configuration in each study for the external tuning ruleset. For example, you could use your own (different) hardware to tune your submission and identify the best hyperparameter configuration (in each study) and then only run this configuration (i.e. 5 runs, one for each study) on the benchmarking hardware.
 
 #### What can I do if running the benchmark is too expensive for me?
 
+NOTE: Submitters are no longer required to self-report results for AlgoPerf competition v0.5.
+
 Submitters unable to self-fund scoring costs can instead self-report only on the [qualification set of workloads](/COMPETITION_RULES.md#qualification-set) that excludes some of the most expensive workloads. Based on this performance on the qualification set, the working group will provide - as funding allows - compute to evaluate and score the most promising submissions. Additionally, we encourage researchers to reach out to the [working group](mailto:[email protected]) to find potential collaborators with the resources to run larger, more comprehensive experiments for both developing and scoring submissions.
 
 #### Can I submit previously published training algorithms as submissions?
```
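A companion sketch for the hardware FAQ in that hunk: under the external tuning ruleset, only the best configuration of each study is re-run on the benchmarking hardware, so 5 studies yield 5 scored runs. The study and trial data below are invented for illustration:

```python
# Hypothetical sketch of the FAQ answer above: per study, pick the
# hyperparameter configuration that reached the validation target fastest,
# then re-run only those picks on the benchmarking hardware.
# All study/trial data below are invented for illustration.
studies = {
    # study name -> list of (configuration, time-to-validation-target in sec)
    "study_1": [("config_a", 9000.0), ("config_b", 7600.0)],
    "study_2": [("config_a", 8100.0), ("config_b", 8800.0)],
    "study_3": [("config_a", 7900.0), ("config_c", 8300.0)],
    "study_4": [("config_b", 8600.0), ("config_c", 8200.0)],
    "study_5": [("config_a", 7700.0), ("config_b", 9100.0)],
}
best_per_study = {
    study: min(trials, key=lambda t: t[1])[0]
    for study, trials in studies.items()
}
# 5 studies -> 5 scored runs on the benchmarking hardware, one per study.
for study, config in sorted(best_per_study.items()):
    print(f"{study}: re-run {config} on the benchmarking hardware")
```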

README.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -27,9 +27,9 @@
 ---
 
 > [!IMPORTANT]
-> Upcoming Deadline:
-> Submission deadline: **April 04th, 2024** (*moved by a week*). \
-> For submission instructions please see [Packaging your Submission Code](/GETTING_STARTED.md#package-your-submission-code) section in the Getting Started document.\
+> Submitters are no longer required to self-report results.
+> We are currently in the process of evaluating and scoring received submissions.
+> We are aiming to release results by July 15th 2024.
 > For other key dates please see [Call for Submissions](CALL_FOR_SUBMISSIONS.md).
 
 ## Table of Contents <!-- omit from toc -->
```
