Commit 359c1f4

Update 4_data_science.md

1 parent e352a48 commit 359c1f4

File tree

1 file changed: +11 additions, −5 deletions


_projects/4_data_science.md

Lines changed: 11 additions & 5 deletions
@@ -1,7 +1,7 @@
 ---
 layout: page
-title: Automated data science
-description: Automating data science with foundation models
+title: Efficient Foundation Models
+description: Structuring context improves inference within foundation models, just as it does in classical statistical models.
 img: assets/img/data_science.jpeg
 importance: 3
 category: methods
@@ -10,9 +10,15 @@ related_publications: true

 {% include figure.liquid loading="eager" path="assets/img/data_science.jpeg" title="heterogeneity" class="img-fluid rounded z-depth-1" %}

-Data science has traditionally taken extensive domain expertise and arduous human effort to design targeted experiments which isolate effects in observational data. We are actively building AI tools to automate and streamline these efforts.
+Structuring context improves inference—not only in statistical graphical models, but also within foundation models. This project investigates how explicit contextual structure can make foundation models more efficient, modular, and interpretable.

-{% nocite bordt2024data %}
-{% nocite lengerich2023llms %}
+We focus on making large models practical for real-world deployment by aligning their internal mechanisms with the same principles that make classical models statistically efficient: modularity, conditional independence, and context-aware adaptation.
+
+This work supports:
+
+- **Structured contextual inference** by building architectural and training constraints that reflect known structure.
+- **Modular reasoning** by decomposing complex predictions into composable parts.
+- **Deployment-readiness** through faster, lower-latency models that retain contextual sensitivity.
+
+Recent work includes **FastCache**{% cite liu2025fastcache %}, an inference-time optimization for diffusion transformers, and ongoing development of **FastLM**, a framework for efficient, composable LLMs.

 <br /><br />
