Commit d1e23cc: Update README.md

1 parent edff801

File tree

1 file changed: +1 addition, -1 deletion


README.md

Lines changed: 1 addition & 1 deletion
@@ -275,7 +275,7 @@ python3 download_pdfs.py # The code is generated by Doubao AI
 |2024.11|🔥🔥[**SageAttention-2**] SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization(@thu-ml)|[[pdf]](https://arxiv.org/pdf/2411.10958)|[[SageAttention]](https://github.com/thu-ml/SageAttention) ![](https://img.shields.io/github/stars/thu-ml/SageAttention) | ⭐️⭐️ |
 |2024.11|🔥🔥[**Squeezed Attention**] SQUEEZED ATTENTION: Accelerating Long Context Length LLM Inference(@UC Berkeley) |[[pdf]](https://arxiv.org/pdf/2411.09688)|[[SqueezedAttention]](https://github.com/SqueezeAILab/SqueezedAttention) ![](https://img.shields.io/github/stars/SqueezeAILab/SqueezedAttention) | ⭐️⭐️ |
 |2024.12|🔥🔥[**TurboAttention**] TURBOATTENTION: EFFICIENT ATTENTION APPROXIMATION FOR HIGH THROUGHPUTS LLMS(@Microsoft)|[[pdf]](https://arxiv.org/pdf/2412.08585)| ⚠️ |⭐️⭐️ |
-|2025.01|🔥🔥[**FFPA**] FFPA: Yet another Faster Flash Prefill Attention with O(1) SRAM complexity for headdim > 256, ~1.5x faster than SDPA EA(@xlite-dev)|[[docs]](https://github.com/xlite-dev/ffpa-attn-mma)| [[ffpa-attn-mma]](https://github.com/xlite-dev/ffpa-attn-mma) ![](https://img.shields.io/github/stars/xlite-dev/ffpa-attn-mma)|⭐️⭐️ |
+|2025.01|🔥🔥[**FFPA**] FFPA: Yet another Faster Flash Prefill Attention with O(1) SRAM complexity for headdim > 256, ~1.5x faster than SDPA EA(@xlite-dev)|[[docs]](https://github.com/xlite-dev/ffpa-attn)| [[ffpa-attn]](https://github.com/xlite-dev/ffpa-attn) ![](https://img.shields.io/github/stars/xlite-dev/ffpa-attn)|⭐️⭐️ |
 |2025.03|🔥🔥[**SpargeAttention**] SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference(@thu-ml)|[[pdf]](https://arxiv.org/pdf/2502.18137)|[[SpargeAttn]](https://github.com/thu-ml/SpargeAttn) ![](https://img.shields.io/github/stars/thu-ml/SpargeAttn) | ⭐️⭐️ |
 |2025.04|🔥🔥[**MMInference**] MMInference: Accelerating Pre-filling for Long-Context Visual Language Models via Modality-Aware Permutation Sparse Attention(@microsoft) | [[pdf]](https://arxiv.org/pdf/2504.16083)|[[MInference]](https://github.com/microsoft/MInference/) ![](https://img.shields.io/github/stars/microsoft/MInference) | ⭐️⭐️ |
 |2025.04|🔥🔥[**Sparse Frontier**] The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs (@Cohere) | [[pdf]](https://arxiv.org/pdf/2504.17768)|[[SparseFrontier]](https://github.com/PiotrNawrot/sparse-frontier) ![](https://img.shields.io/github/stars/PiotrNawrot/sparse-frontier) | ⭐️⭐️ |

0 commit comments