Commit 5d32c6a

Several tweaks (#318)
1 parent dc85a1c commit 5d32c6a

3 files changed: 8 additions, 11 deletions

source/funding/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -49,13 +49,13 @@ Any opinions, findings, and conclusions of our research projects are those of th
 ![Ford](images/ford.png)
 ![Cisco](images/cisco.png)
 ![Google](images/google.png)
-![Slalom](images/slalom.png)
 </div>
 
 ## Past Sponsors
 
 <div class='flex-row'>
 
+![Slalom](images/slalom.png)
 ![VMware](images/vmware.png)
 ![Salesforce](images/salesforce.png)
 ![Meta](images/meta.png)
```

source/publications/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -159,6 +159,7 @@ venues:
   name: The 7th Conference on Machine Learning and Systems
   date: 2024-05-13
   url: https://mlsys.org/Conferences/2024
+  acceptance: 22.02%
 - key: MLSys'23
   name: The 6th Conference on Machine Learning and Systems
   date: 2023-06-04
```

source/research/index.md

Lines changed: 6 additions & 10 deletions
```diff
@@ -20,35 +20,31 @@ One of our key focus areas is multi-scale resource sharing for AI accelerators f
 We also work on planning and optimizing executions of distributed AI systems.
 Major projects include [Salus](https://github.com/SymbioticLab/Salus) and [Tiresias](https://github.com/SymbioticLab/Tiresias).
 
+## [Energy-Efficient Systems](/publications/#/topic:Energy-Efficient%20Systems)
+The energy consumption of computing systems is increasing with the rising popularity of Big Data and AI.
+While the hardware community has invested considerable effort in energy optimizations, we observe that similar efforts on the software side are significantly lacking.
+[Our initiative](https://ml.energy) to understand and optimize the energy consumption of modern AI workloads is exposing new ways to understand energy consumption from software.
+Major projects include [Zeus](https://ml.energy/zeus), the first GPU energy-vs-training performance tradeoff optimizer for DNN training.
+
 ## [Disaggregation](/publications/#/topic:Disaggregation)
 Modern datacenters often overprovision application memory to avoid performance cliffs, leading to 50% underutilization on average.
 Our research addresses this fundamental problem via practical memory disaggregation, whereby an application can leverage both local and remote memory by leveraging high-speed networks, and more recently with emerging CXL technology.
 We are building systems that can ensure a disaggregated system with 100s of nanoseconds latency.
 We are generally interested in disaggregating all resources for fully utilized datacenters.
 Major projects include [Infiniswap](https://infiniswap.github.io/), the first practical memory disaggregation software, and [TPP](https://arxiv.org/abs/2206.02878).
 
-
 ## [Wide-Area Computing](/publications/#/topic:Wide-Area%20Computing)
 Most data is generated outside cloud datacenters.
 Collecting voluminous remote data to a central location not only presents a bandwidth and storage problem but increasingly is likely to violate privacy regulations such as General Data Protection Regulation (GDPR).
 In these settings, data systems must minimize communication instead.
 We are developing systems, algorithms, and benchmarks to analyze data distributed across multiple cloud datacenters and end-user devices to enable geo-distributed/federated learning and analytics.
 Major projects include [FedScale](https://fedscale.ai/), the largest benchmark and a scalable and extensible platform for federated learning.
 
-
-## [Energy-Efficient Systems](/publications/#/topic:Energy-Efficient%20Systems)
-The energy consumption of computing systems is increasing with the rising popularity of Big Data and AI.
-While the hardware community has invested considerable effort in energy optimizations, we observe that similar efforts on the software side are significantly lacking.
-[Our initiative](https://ml.energy) to understand and optimize the energy consumption of modern AI workloads is exposing new ways to understand energy consumption from software.
-Major projects include [Zeus](https://ml.energy/zeus), the first GPU energy-vs-training performance tradeoff optimizer for DNN training.
-
-
 ## [Datacenter Networking](/publications/#/topic:Datacenter%20Networking)
 We also work on network resource management schemes to isolate Big Data and AI systems at the edge and inside the datacenter network.
 Our recent focus has primarily been on emerging networking technologies such as low-latency RDMA-enabled networks, programmable switches, and SmartNICs.
 We are also interested in improving the existing networking infrastructure such as improving QoS for low-latency RPCs in datacenters.
 Major projects include [Aequitas](https://github.com/SymbioticLab/Aequitas) and [Justitia](https://github.com/SymbioticLab/Justitia).
 
-
 ## [Big Data Systems](/publications/#/topic:Big%20Data%20Systems)
 In the recent past, we worked on designing and improving big data systems via new algorithms for resource scheduling, caching data in memory, and dynamic query planning to improve resource efficiency, application performance, and fairness.
```
