LiquidSpikeFormer is a neuromorphic deep learning architecture designed for the sparse, asynchronous, event-driven data produced by Dynamic Vision Sensors (DVS).
By combining Spiking Neural Networks (SNNs), Liquid Neural Networks (LNNs), and Transformer models, this project explores efficient and biologically plausible ways to improve accuracy, latency, and generalization in event-based recognition tasks.
How can we exploit the rich temporal dynamics and sparsity inherent in event-based sensor data (e.g., from a DVS) to achieve high-accuracy, low-latency, and power-efficient recognition?
- Temporal Sparsity: Handling asynchronous, irregularly spaced event data.
- Multiscale Dynamics: Integrating both fast fine-scale and slower coarse-scale temporal patterns (see the encoding sketch after this list).
- Biological Plausibility: Designing neuromorphic-friendly, energy-efficient architectures.
- Generalization: Achieving robust performance despite limited labeled data.
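As a concrete illustration of the first two challenges, the sketch below bins raw (x, y, t, polarity) events into dense voxel grids at two temporal resolutions. This is a minimal example with illustrative names and parameters, not the project's actual preprocessing.

```python
# Minimal sketch: binning sparse, irregularly timed DVS events into dense
# voxel grids at two temporal scales. All names here are illustrative.
import numpy as np

def events_to_voxel(events: np.ndarray, n_bins: int,
                    height: int, width: int) -> np.ndarray:
    """events: (N, 4) array of (x, y, t, p) rows with polarity p in {-1, +1}."""
    grid = np.zeros((n_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalize timestamps into [0, n_bins) despite irregular event spacing.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (n_bins - 1e-6)
    bins = t_norm.astype(np.int64)
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    p = events[:, 3].astype(np.float32)
    # Accumulate signed polarity; sparsity means most cells stay zero.
    np.add.at(grid, (bins, y, x), p)
    return grid

# Dual-rate encoding: fine bins keep rapid transients,
# coarse bins keep slow context.
# fine = events_to_voxel(ev, n_bins=32, height=128, width=128)
# coarse = events_to_voxel(ev, n_bins=8, height=128, width=128)
```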
The LiquidSpikeFormer combines multiple novel modules:
- Dual (fine/coarse) spike encoders to capture rich multi-scale temporal information.
- Separate spatial and temporal convolutions for rich feature extraction from sparse spike-based events.
- Adaptive spiking dynamics through learnable liquid neural states, incorporating neuromodulation for context-dependent adaptation.
- Post-spiking transformers capturing long-term temporal dependencies and complex interactions within the spiking sequences.
- A deep-supervision training scheme that reduces inference latency and improves model efficiency during training (a simplified sketch of the full pipeline follows this list).
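To make the pipeline concrete, here is a heavily simplified PyTorch sketch of how these modules might compose. The surrogate-gradient spike function, the liquid-state update rule, and all shapes and names are assumptions made for illustration, not the repository's actual implementation.

```python
# Simplified sketch of a LiquidSpikeFormer-style pipeline. Module names,
# sizes, and update rules are illustrative assumptions, not the real code.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # fast-sigmoid surrogate

class LiquidSpikingCell(nn.Module):
    """Leaky state with a learnable, input-dependent time constant
    (a neuromodulation-style gate)."""
    def __init__(self, dim):
        super().__init__()
        self.inp = nn.Linear(dim, dim)
        self.tau = nn.Linear(dim, dim)

    def forward(self, x, state):
        tau = torch.sigmoid(self.tau(x))            # per-feature decay in (0, 1)
        state = tau * state + (1 - tau) * self.inp(x)
        return SpikeFn.apply(state - 1.0), state    # spike at threshold 1.0

class LiquidSpikeFormerSketch(nn.Module):
    def __init__(self, dim=128, n_classes=11, nhead=4):
        super().__init__()
        self.spatial = nn.Conv2d(2, dim, 3, stride=2, padding=1)  # per time bin
        self.cell = LiquidSpikingCell(dim)
        enc = nn.TransformerEncoderLayer(dim, nhead=nhead, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, frames):                      # frames: (B, T, 2, H, W)
        state, spikes = None, []
        for t in range(frames.shape[1]):            # step through time bins
            f = self.spatial(frames[:, t]).mean(dim=(2, 3))  # (B, dim)
            if state is None:
                state = torch.zeros_like(f)
            s, state = self.cell(f, state)
            spikes.append(s)
        seq = torch.stack(spikes, dim=1)            # (B, T, dim) spike features
        return self.head(self.temporal(seq).mean(dim=1))
```

The key design point the sketch tries to capture: convolutions extract spatial features per time bin, the liquid cell adapts its effective time constant to the input, and the transformer attends over the resulting spike sequence for long-range temporal structure.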
Implementing this architecture is expected to:
- Significantly outperform traditional CNN/Transformer methods on event-based datasets.
- Achieve state-of-the-art accuracy and generalization on tasks like gesture recognition.
- Drastically reduce power consumption, making it ideal for real-time applications in IoT, robotics, autonomous systems, and neuromorphic hardware platforms.
To push this research frontier further, we plan to explore:
- Dynamically Neuromodulated Liquid Transformers: Adaptive attention and positional embeddings modulated by global spike dynamics (a toy sketch follows this list).
- Meta-Spiking Fusion with Hyper-SNNs: Leveraging spike-based hypernetworks for adaptive fusion strategies.
- Liquid Reservoir Transformers: Incorporating reservoir computing dynamics directly into transformer positional encoding.
- Spiking Equilibrium Attention Networks (SEANs): Integrating equilibrium network dynamics to dynamically determine computation depth per input sequence.
- Self-supervised Spiking Contrastive Predictive Coding (S-SCPC): Developing powerful spike-level representation learning methods for significantly enhanced generalization.
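To give a flavor of the first direction, the toy sketch below scales attention logits by a global spike-rate signal, so busier event windows sharpen the attention distribution. The modulation rule and all names are hypothetical, not a committed design.

```python
# Toy illustration of spike-rate-modulated attention (hypothetical rule).
import torch
import torch.nn.functional as F

def spike_modulated_attention(q, k, v, spikes):
    """q, k, v: (B, T, D) projections; spikes: (B, T, D) binary spike features."""
    rate = spikes.mean(dim=(1, 2), keepdim=True)    # global activity in [0, 1]
    temp = 1.0 + rate                               # higher activity -> sharper attention
    logits = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    return F.softmax(logits * temp, dim=-1) @ v
```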
Coming soon!
We welcome contributions to explore these exciting research directions:
- Open an issue to propose new research ideas or experiments.
- Submit a pull request to improve existing implementations.
For collaboration or questions, please reach out to:
- Kavyansh Tyagi
- GitHub: KAVYANSHTYAGI
Let's push the boundaries of neuromorphic deep learning together! 🚀