# NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions
NeuraLUT is the first quantized neural network training methodology that maps dense, full-precision sub-networks with skip-connections to LUTs, leveraging the underlying structure of the FPGA architecture. This project is a derivative work based on [LogicNets](https://github.com/Xilinx/logicnets), which is licensed under the Apache License 2.0. This code accompanies a publication at the International Conference on Field-Programmable Logic and Applications (FPL) 2024, available on [ArXiv](https://arxiv.org/abs/2403.00849).
> _Built on top of [LogicNets](https://github.com/Xilinx/logicnets), NeuraLUT introduces new architecture designs, optimized training flows, and innovative sparsity handling._
---
## Requirements

* python=3.8
* pytorch==1.4.0
* torchvision
#### ✨ New! ReducedLUT branch available for advanced compression using don't-cares (see below).
## Install Brevitas
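NeuraLUT relies on [Brevitas](https://github.com/Xilinx/brevitas) for quantization-aware training. A typical pip installation looks like the following; note the project may pin a specific Brevitas version or commit, so check the repository's setup files before installing:

```shell
# Install Brevitas, Xilinx's PyTorch library for quantization-aware training
pip install brevitas
```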
---
## 🚀 Features
- 🔧 **Quantized training** with sub-networks synthesized into truth tables.
- ⚡️ **Skip connections within LUTs** for better gradient flow and performance.
- 🎯 **Easy FPGA integration** using Vivado and Verilator.
- 📊 **Experiment tracking** with [Weights & Biases](https://wandb.ai/).
- 🧠 Supports **MNIST** and **Jet Substructure Tagging**.
- 🧪 Integration with [Brevitas](https://github.com/Xilinx/brevitas) for quantization-aware training.
---

## 🔍 Key Differences from LogicNets

* We present a novel way of designing deep NNs with specific sparsity patterns that resemble sparsely connected dense partitions, enabling the encapsulation of sub-networks entirely within a single LUT. We enhance training by integrating skip-connections in our sub-networks, which facilitate the flow of gradients and promote stable, efficient learning.
* Both NeuraLUT and LogicNets are capable of training on the Jet Substructure Tagging dataset. Additionally, NeuraLUT offers compatibility with the MNIST dataset.
* NeuraLUT introduces novel model architectures; the distinct structures are detailed in our accompanying paper.
* NeuraLUT is tailored for optimal GPU utilization.
* To track experiments, NeuraLUT uses WandB instead of TensorBoard.
* While LogicNets enforces a priori sparsity by utilizing a weight mask that deactivates specific weights, NeuraLUT takes a different approach: instead of a weight mask, it utilizes a feature mask (FeatureMask), which reshapes the feature vector so that each output channel receives only its fanin features.
* NeuraLUT introduces a completely new forward function that contains multiple fully-connected layers with skip-connections.
* The `calculate_truth_tables` function was adapted to align with the NeuraLUT neuron structure, and it was also improved for efficiency.
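To make the points above concrete, here is a minimal, hypothetical sketch of the three ideas together: a feature mask that gathers only the fanin inputs per output channel, a dense full-precision sub-network with a skip connection, and truth-table extraction by enumerating all quantized input combinations. Names like `FeatureMask`, `SubNetwork`, and `truth_table` follow the description above, not the actual NeuraLUT API:

```python
import itertools
import torch
import torch.nn as nn

class FeatureMask(nn.Module):
    """Gather a fixed set of `fanin` input features for each output channel."""
    def __init__(self, indices):
        super().__init__()
        # indices: LongTensor of shape (out_channels, fanin)
        self.register_buffer("indices", indices)

    def forward(self, x):
        # x: (batch, in_features) -> (batch, out_channels, fanin)
        return x[:, self.indices]

class SubNetwork(nn.Module):
    """Dense, full-precision MLP with a skip connection, later mapped to one LUT."""
    def __init__(self, fanin, hidden):
        super().__init__()
        self.fc1 = nn.Linear(fanin, hidden)
        self.fc2 = nn.Linear(hidden, 1)
        self.skip = nn.Linear(fanin, 1)  # skip path eases gradient flow

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x))) + self.skip(x)

def truth_table(subnet, fanin, levels):
    """Enumerate every quantized input combination to fill the LUT contents."""
    rows = torch.tensor(list(itertools.product(levels, repeat=fanin)))
    with torch.no_grad():
        return subnet(rows.float())  # one output value per input combination

# Usage: a 4-input neuron with 2-bit inputs -> 4**4 = 256 truth-table rows
net = SubNetwork(fanin=4, hidden=8)
table = truth_table(net, fanin=4, levels=[0.0, 1.0, 2.0, 3.0])
print(table.shape)  # torch.Size([256, 1])
```

Because the sub-network's density is hidden inside the enumerated table, its depth and width cost nothing in hardware: only the fanin (table address width) determines LUT size.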
## Citation
If you find this work valuable, please consider citing our [FPL 2024 paper](https://arxiv.org/abs/2403.00849).
---
## 🌿 ReducedLUT
We released a dedicated [ReducedLUT branch](https://github.com/MartaAndronic/NeuraLUT/tree/reducedlut) which demonstrates the **L-LUT compression pipeline** described in our ReducedLUT paper, exploiting don't-care conditions for further compression.