
Awesome Activation Sparsification

A curated list of neural network activation sparsification methods, inspired by Awesome Model Quantization and Awesome Pruning.

Contributions are welcome; feel free to open a pull request to add more papers.

Type of Sparsification

| Type | Explanation   |
|------|---------------|
| U    | Unstructured  |
| S    | Structured    |
| R    | Regularizer   |
| T    | Threshold     |
| D    | Dropout       |
| Pre  | Pre-training  |
| Post | Post-training |
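The tags combine along three axes: the sparsity pattern ([U]/[S]), how sparsity is induced ([R]/[T]/[D]), and when it is applied ([Pre]/[Post]). As a concrete illustration, here is a minimal PyTorch sketch of the [U] + [T] + [Post] combination: zeroing hidden activations whose magnitude falls below a fixed threshold at inference time. The module name `ThresholdSparsify` and the threshold value are illustrative, not drawn from any paper listed here.

```python
import torch
import torch.nn as nn

class ThresholdSparsify(nn.Module):
    """Zero activations below a fixed magnitude threshold:
    unstructured [U], threshold-based [T], applied post-training [Post].
    A hypothetical sketch, not any specific paper's method."""

    def __init__(self, threshold: float = 0.1):
        super().__init__()
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Keep only activations with |x| >= threshold; the boolean
        # mask broadcasts to x's dtype and zeroes the rest.
        return x * (x.abs() >= self.threshold)

# Usage: insert after a nonlinearity in a (pre)trained model.
mlp = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    ThresholdSparsify(threshold=0.1),
    nn.Linear(256, 10),
)
x = torch.randn(8, 64)
hidden = mlp[:3](x)
print((hidden == 0).float().mean())  # fraction of zeroed hidden activations
```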

Survey Papers

2021

  • [JMLR] Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks

Papers

2025

  • [AAAI] From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers [U] [R] [Post] GitHub
  • [COLING] ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models [U] [R] [T] [Post] GitHub
  • [ICLR] Training-Free Activation Sparsity in Large Language Models [S] [T] GitHub
  • [ICLR] R-Sparse: Rank-Aware Activation Sparsity for Efficient LLM Inference [S] [T] GitHub
  • [ICML] La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse Activation [U]

2024

  • [COLT] Learning Neural Networks with Sparse Activations [U] [Pre]
  • [ICLR] Deep Neural Network Initialization with Sparsity Inducing Activations [U] [T] [Pre] GitHub
  • [ICLR] SAS: Structured Activation Sparsification [S] [Pre] GitHub
  • [NeurIPS] Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders [U] [R] [Post]
  • [NeurIPS] Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion [U] [R] [Post] GitHub
  • [NeurIPSW] Post-Training Statistical Calibration for Higher Activation Sparsity [U] [T] [Post] GitHub
  • [EMNLP] CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification [S] [T]
  • [WACV] CATS: Combined Activation and Temporal Suppression for Efficient Network Inference [U] [R] [T] [Post] GitHub

2023

  • [arXiv] ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models [U] [Post]
  • [CVPR] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer [U] [Post] GitHub
  • [CVPRW] STAR: Sparse Thresholded Activation under partial-Regularization for Activation Sparsity Exploration [U] [R] [T] [Post]
  • [ICCVW] Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity [S] [Post] GitHub
  • [SIGIR] Representation Sparsification with Hybrid Thresholding for Fast SPLADE-based Document Retrieval [U] [R] [T] [Post] GitHub

2022

  • [DSD] ARTS: An Adaptive Regularization Training Schedule for Activation Sparsity Exploration [U] [R] [Post]

2020

  • [ICML] Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks [U] [R] [T] [Post]

2019

  • [arXiv] How Can We Be So Dense? The Benefits of Using Highly Sparse Representations [U] [Pre] GitHub
  • [CVPR] Accelerating Convolutional Neural Networks via Activation Map Compression [U] [R] [Post]
  • [ICTAI] DASNet: Dynamic Activation Sparsity for Neural Network Efficiency Improvement [U] [D] [Post]

2018

  • [HPCA] Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks [U] [D] [Pre]

2015

  • [NeurIPS] Winner-Take-All Autoencoders [U] [Pre]

Relevant Awesome Lists

  • Awesome Model Quantization
  • Awesome Pruning
