Commit 23c9fd0

Merge pull request #302 from mosharaf/develop
Bug fix
2 parents 7a48e35 + ff77576

File tree

1 file changed (+2, -2 lines)

source/_data/SymbioticLab.bib

Lines changed: 2 additions & 2 deletions
@@ -1998,9 +1998,9 @@ @Article{curie:arxiv25
 archivePrefix = {arXiv},
 eprint = {2502.16069},
 url = {https://arxiv.org/abs/2502.16069},
-publist_link = {code || https://github.com/Just-Curieous/Curie},
 publist_confkey = {arXiv:2502.16069},
 publist_link = {paper || https://arxiv.org/abs/2502.16069},
+publist_link = {code || https://github.com/Just-Curieous/Curie},
 publist_topic = {Systems + AI},
 publist_abstract = {
 Scientific experimentation, a cornerstone of human progress, demands rigor in reliability, methodical control, and interpretability to yield meaningful results. Despite the growing capabilities of large language models (LLMs) in automating different aspects of the scientific process, automating rigorous experimentation remains a significant challenge. To address this gap, we propose Curie, an AI agent framework designed to embed rigor into the experimentation process through three key components: an intra-agent rigor module to enhance reliability, an inter-agent rigor module to maintain methodical control, and an experiment knowledge module to enhance interpretability. To evaluate Curie, we design a novel experimental benchmark composed of 46 questions across four computer science domains, derived from influential research papers, and widely adopted open-source projects. Compared to the strongest baseline tested, we achieve a 3.4× improvement in correctly answering experimental questions. Curie is open-sourced at https://github.com/Just-Curieous/Curie.
@@ -2016,9 +2016,9 @@ @Article{cornstarch:arxiv25
 archivePrefix = {arXiv},
 eprint = {2503.11367},
 url = {https://arxiv.org/abs/2503.11367},
-publist_link = {code || https://github.com/cornstarch-org/Cornstarch},
 publist_confkey = {arXiv:2503.11367},
 publist_link = {paper || https://arxiv.org/abs/2503.11367},
+publist_link = {code || https://github.com/cornstarch-org/Cornstarch},
 publist_topic = {Systems + AI},
 publist_abstract = {
 Multimodal large language models (MLLMs) extend the capabilities of large language models (LLMs) by combining heterogeneous model architectures to handle diverse modalities like images and audio. However, this inherent heterogeneity in MLLM model structure and data types makes makeshift extensions to existing LLM training frameworks unsuitable for efficient MLLM training.
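
In both entries, the fix moves the code link below the paper link, so the paper link becomes the first publist_link field in the record. Assuming the site's publication-list generator emits links in the order they appear in the .bib source (standard BibTeX semantics keep only one value per duplicated field name, so repeated publist_link fields plausibly require a raw scan of the entry text), a minimal sketch of that collection step might look like the Python below. The entry text, regex, and function name are illustrative assumptions, not the actual plugin code.

# A minimal sketch (not the site's actual generator) of collecting repeated
# publist_link fields in file order from the raw entry text.
import re

# Hypothetical entry text mirroring the fixed curie:arxiv25 record.
ENTRY = """
@Article{curie:arxiv25,
  archivePrefix   = {arXiv},
  eprint          = {2502.16069},
  url             = {https://arxiv.org/abs/2502.16069},
  publist_confkey = {arXiv:2502.16069},
  publist_link    = {paper || https://arxiv.org/abs/2502.16069},
  publist_link    = {code || https://github.com/Just-Curieous/Curie},
}
"""

def collect_links(entry_text):
    """Return (label, url) pairs for every publist_link field, in file order."""
    pairs = []
    for match in re.finditer(r"publist_link\s*=\s*\{([^}]*)\}", entry_text):
        label, _, url = match.group(1).partition("||")
        pairs.append((label.strip(), url.strip()))
    return pairs

if __name__ == "__main__":
    # With the fix applied, "paper" precedes "code", so a renderer that
    # emits links in file order lists the paper link first.
    for label, url in collect_links(ENTRY):
        print(f"{label}: {url}")

Under that assumption, the rendered publication entry now shows the paper link before the code link for both curie:arxiv25 and cornstarch:arxiv25.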
