Commit 445dab9

🔥🔥[Context Distillation] Efficient LLM Context Distillation (#57)
1 parent e1ec282 commit 445dab9

File tree: 1 file changed (+1, −0 lines)

README.md — 1 addition & 0 deletions

@@ -262,6 +262,7 @@ Awesome-LLM-Inference: A curated list of [📙Awesome LLM Inference Papers with
 |2024.08|🔥🔥[**500xCompressor**] 500xCompressor: Generalized Prompt Compression for Large Language Models(@University of Cambridge) | [[pdf]](https://arxiv.org/pdf/2408.03094) | ⚠️ |⭐️⭐️ |
 |2024.08|🔥🔥[**Eigen Attention**] Eigen Attention: Attention in Low-Rank Space for KV Cache Compression(@purdue.edu) | [[pdf]](https://arxiv.org/pdf/2408.05646) | ⚠️ |⭐️⭐️ |
 |2024.09|🔥🔥[**Prompt Compression**] Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference(@Alterra AI)| [[pdf]](https://arxiv.org/pdf/2409.01227) | ⚠️ |⭐️⭐️ |
+|2024.09|🔥🔥[**Context Distillation**] Efficient LLM Context Distillation(@gatech.edu)| [[pdf]](https://arxiv.org/pdf/2409.01930) | ⚠️ |⭐️⭐️ |

 ### 📖Long Context Attention/KV Cache Optimization ([©️back👆🏻](#paperlist))
 <div id="Long-Context-Attention-KVCache"></div>

0 commit comments

Comments
 (0)