
Commit 96a0ed3

add EXP-Bench
1 parent 3e059f5 commit 96a0ed3

File tree

1 file changed: +19 -0 lines


source/_data/SymbioticLab.bib

Lines changed: 19 additions & 0 deletions
@@ -2083,3 +2083,22 @@ @article{mlenergy-benchmark:arxiv25
 As the adoption of Generative AI in real-world services grows explosively, \emph{energy} has emerged as a critical bottleneck resource. However, energy remains a metric that is often overlooked, under-explored, or poorly understood in the context of building ML systems. We present the ML.ENERGY Benchmark, a benchmark suite and tool for measuring inference energy consumption under realistic service environments, and the corresponding ML.ENERGY Leaderboard, which have served as a valuable resource for those hoping to understand and optimize the energy consumption of their generative AI services. In this paper, we explain four key design principles for benchmarking ML energy we have acquired over time, and then describe how they are implemented in the ML.ENERGY Benchmark. We then highlight results from the latest iteration of the benchmark, including energy measurements of 40 widely used model architectures across 6 different tasks, case studies of how ML design choices impact energy consumption, and how automated optimization recommendations can lead to significant (sometimes more than 40\%) energy savings without changing what is being computed by the model. The ML.ENERGY Benchmark is open-source and can be easily extended to various customized models and application scenarios.
 }
 }
+
+@Article{expbench:arxiv25,
+  author = {Patrick Tser Jern Kon and Jiachen Liu and Xinyi Zhu and Qiuyi Ding and Jingjia Peng and Jiarong Xing and Yibo Huang and Yiming Qiu and Jayanth Srinivasa and Myungjin Lee and Mosharaf Chowdhury and Matei Zaharia and Ang Chen},
+  title = {{EXP-Bench}: Can AI Conduct AI Research Experiments?},
+  year = {2025},
+  month = {Jun},
+  volume = {abs/2505.24785},
+  archivePrefix = {arXiv},
+  eprint = {2505.24785},
+  url = {https://arxiv.org/abs/2505.24785},
+  publist_confkey = {arXiv:2505.24785},
+  publist_link = {paper || https://arxiv.org/abs/2505.24785},
+  publist_link = {code || https://github.com/Just-Curieous/Curie/tree/main/benchmark/exp_bench},
+  publist_link = {blog || https://www.just-curieous.com},
+  publist_topic = {Systems + AI},
+  publist_abstract = {
+  Automating AI research holds immense potential for accelerating scientific progress, yet current AI agents struggle with the complexities of rigorous, end-to-end experimentation. We introduce EXP-Bench, a novel benchmark designed to systematically evaluate AI agents on complete research experiments sourced from influential AI publications. Given a research question and incomplete starter code, EXP-Bench challenges AI agents to formulate hypotheses, design and implement experimental procedures, execute them, and analyze results. To enable the creation of such intricate and authentic tasks with high fidelity, we design a semi-autonomous pipeline to extract and structure crucial experimental details from these research papers and their associated open-source code. With this pipeline, EXP-Bench curated 461 AI research tasks from 51 top-tier AI research papers. Evaluations of leading LLM-based agents, such as OpenHands and IterativeAgent, on EXP-Bench demonstrate partial capabilities: while scores on individual experimental aspects such as design or implementation correctness occasionally reach 20-35\%, the success rate for complete, executable experiments is a mere 0.5\%. By identifying these bottlenecks and providing realistic step-by-step experiment procedures, EXP-Bench serves as a vital tool for future AI agents to improve their ability to conduct AI research experiments. EXP-Bench is open-sourced at https://github.com/Just-Curieous/Curie/tree/main/benchmark/exp_bench.
+  }
+}
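
For reference, a minimal sketch of how the new entry could be cited from a LaTeX document, assuming a plain BibTeX setup with SymbioticLab.bib on the BibTeX search path; the document scaffolding below is hypothetical, and standard styles will simply skip the nonstandard publist_* fields (and may warn about the missing journal field):

\documentclass{article}
\begin{document}
% Cite key comes from the entry added in this commit.
Recent work~\cite{expbench:arxiv25} asks whether AI agents can conduct
complete AI research experiments end to end.
\bibliographystyle{plain}
% Assumes source/_data/SymbioticLab.bib is visible to BibTeX as SymbioticLab.bib.
\bibliography{SymbioticLab}
\end{document}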
