
Commit 55cdf1e

Add Cross-Layer Energy preprint (#250)
1 parent 3345609 commit 55cdf1e

File tree

1 file changed: +18 -0 lines changed


source/_data/SymbioticLab.bib

Lines changed: 18 additions & 0 deletions
@@ -1704,3 +1704,21 @@ @Article{venn:arxiv23
   In this paper, we present Venn, an FL resource manager, that efficiently schedules ephemeral, heterogeneous devices among many FL jobs, with the goal of reducing their average job completion time (JCT). Venn formulates the Intersection Resource Scheduling (IRS) problem to identify complex resource contention among multiple FL jobs. Then, Venn proposes a contention-aware scheduling heuristic to minimize the average scheduling delay. Furthermore, it proposes a resource-aware device-to-job matching heuristic that focuses on optimizing response collection time by mitigating stragglers. Our evaluation shows that, compared to the state-of-the-art FL resource managers, Venn improves the average JCT by up to 1.88X.
   }
 }
+
+@Article{crosslayer-energy:arxiv24,
+  author = {Jae-Won Chung and Mosharaf Chowdhury},
+  journal = {CoRR},
+  title = {Toward Cross-Layer Energy Optimizations in Machine Learning Systems},
+  year = {2024},
+  month = {Apr},
+  volume = {abs/2404.06675},
+  archiveprefix = {arXiv},
+  eprint = {2404.06675},
+  url = {https://arxiv.org/abs/2404.06675},
+  publist_confkey = {arXiv:2404.06675},
+  publist_link = {paper || https://arxiv.org/abs/2404.06675},
+  publist_topic = {Systems + AI},
+  publist_topic = {Energy-Efficient Systems},
+  publist_abstract = {The enormous energy consumption of machine learning (ML) and generative AI workloads shows no sign of waning, taking a toll on operating costs, power delivery, and environmental sustainability. Despite a long line of research on energy-efficient hardware, we found that software plays a critical role in ML energy optimization through two recent works: Zeus and Perseus. This is especially true for large language models (LLMs) because their model sizes and, therefore, energy demands are growing faster than hardware efficiency improvements. Therefore, we advocate for a cross-layer approach for energy optimizations in ML systems, where hardware provides architectural support that pushes energy-efficient software further, while software leverages and abstracts the hardware to develop techniques that bring hardware-agnostic energy-efficiency gains.
+  }
+}
