Spiking Neural Networks (SNNs) are a class of artificial neural networks that closely mimic how biological neurons communicate. They transmit information through spikes (discrete events), which can make them more efficient than conventional networks in energy consumption and processing speed, particularly on neuromorphic hardware.
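The spiking dynamics described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is only an illustration of the idea, not code from this repository; the decay factor, threshold, and input current are illustrative assumptions:

```python
def lif_step(mem, x, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire update: decay, integrate, fire, reset.

    beta and threshold are illustrative constants, not values from this repo.
    """
    mem = beta * mem + x              # leaky integration of the input current
    spike = 1 if mem >= threshold else 0
    if spike:
        mem -= threshold              # soft reset after emitting a spike
    return spike, mem

# Drive the neuron with a constant input and record the binary spike train.
mem = 0.0
spikes = []
for _ in range(10):
    s, mem = lif_step(mem, 0.4)
    spikes.append(s)
print(spikes)  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Note how information leaves the neuron only as discrete 0/1 events rather than continuous activations; snnTorch provides this kind of neuron (e.g. its `Leaky` class) as a drop-in PyTorch module.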
This repository provides implementations of Spikformer-like models, which combine the advantages of Transformers and SNNs. The main purpose of this repository is to help users easily understand the key concepts and ideas behind these models. To that end, we provide notebook files with detailed explanations and simple implementations using PyTorch and snnTorch. Below is a list of the models included in this repository (or planned as future work):
Model | Colab Link | Paper (Year) | Contributions |
---|---|---|---|
Spikformer | | Zhou, Zhaokun, et al. (2022) | Spiking Self-Attention, Spiking Patch Splitting |
Spike-driven Transformer | | Yao, Man, et al. (2023) | Spike-driven Self-Attention, Membrane Shortcut |
Spiking Token Mixer | - | Deng, Shikuang, et al. (2024) | |
One-step Spiking Transformer | - | Song, Xiaotian, et al. (2024) | |
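To give a flavor of the core idea behind these models, Spiking Self-Attention in Spikformer computes attention from binary spike matrices and drops the softmax entirely, since products of non-negative spikes need no normalization. The sketch below is a simplified, dependency-free illustration; the toy matrices and the final threshold are assumptions for the demo, not the paper's actual shapes or hyperparameters:

```python
def matmul(A, B):
    """Naive matrix multiply for small demo matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Binary spike matrices standing in for Q, K, V (one row per token).
Q = [[1, 0, 1], [0, 1, 0]]
K = [[1, 1, 0], [0, 0, 1]]
V = [[0, 1, 1], [1, 0, 0]]

# Spiking Self-Attention: attention scores are Q K^T (no softmax needed,
# since all entries are non-negative), applied to V and re-thresholded
# back into spikes. theta is an assumed firing threshold for the demo.
KT = [list(col) for col in zip(*K)]
attn = matmul(Q, KT)                 # non-negative integer attention scores
out = matmul(attn, V)
theta = 1.0
spikes = [[1 if x >= theta else 0 for x in row] for row in out]
print(spikes)  # → [[1, 1, 1], [0, 1, 1]]
```

Because every operand is a 0/1 spike, the whole computation reduces to additions and comparisons, which is what makes this attention variant attractive for energy-efficient neuromorphic hardware.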
If you're not familiar with Spiking Neural Networks (SNNs) and snnTorch, please work through the snnTorch Tutorials before starting the notebooks in this repository.
If you use conda, create a new Python 3.11 environment:
conda create -n spikformer-like-models python=3.11 -y
conda activate spikformer-like-models
Install the required packages using pip:
pip install -r requirements.txt