Commit 610c8a2

Merge pull request #2 from MartaAndronic/testbench
Updated README and requirements
2 parents a95d829 + b8a40ff commit 610c8a2

File tree

3 files changed: +126 −39 lines changed

README.md

Lines changed: 120 additions & 36 deletions
# NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions

[![DOI](https://img.shields.io/badge/DOI-10.1109/FPL64840.2024.00028-orange)](https://doi.org/10.1109/FPL64840.2024.00028)
[![arXiv](https://img.shields.io/badge/arXiv-2403.00849-b31b1b.svg?style=flat)](https://arxiv.org/abs/2403.00849)

<p align="left">
  <img src="logo.png" width="500" alt="NeuraLUT Logo">
</p>

NeuraLUT is the first quantized neural network training methodology that maps dense, full-precision sub-networks with skip-connections to LUTs, leveraging the underlying structure of the FPGA architecture.

> _Built on top of [LogicNets](https://github.com/Xilinx/logicnets), NeuraLUT introduces new architecture designs, optimized training flows, and innovative sparsity handling._

---
#### ✨ New! ReducedLUT branch available for advanced compression using don't-cares (see below).

---

## 🚀 Features

- 🔧 **Quantized training** with sub-networks synthesized into truth tables.
- ⚡️ **Skip connections within LUTs** for better gradient flow and performance.
- 🎯 **Easy FPGA integration** using Vivado and Verilator.
- 📊 **Experiment tracking** with [Weights & Biases](https://wandb.ai/).
- 🧠 Supports **MNIST** and **Jet Substructure Tagging**.
- 🧪 Integration with [Brevitas](https://github.com/Xilinx/brevitas) for quantization-aware training.

---
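The idea behind the first two bullets, sub-networks enumerated into truth tables, with a skip connection inside, can be sketched in a few lines of plain Python. This is a toy illustration with made-up weights, not the repo's training or synthesis flow:

```python
import itertools

def subnet(x0, x1):
    """Toy sub-network: one hidden ReLU unit plus a skip connection from x0.

    Weights (2, -1) are illustrative only.
    """
    h = max(0, 2 * x0 - x1)   # hidden unit with ReLU activation
    return h + x0             # skip connection adds the input back

def to_truth_table(bits_per_input=2):
    """Enumerate every quantized input combination and record the output.

    With 2 inputs at 2 bits each, the whole sub-network collapses into a
    16-entry lookup table -- the core idea behind mapping nets to LUTs.
    """
    levels = range(2 ** bits_per_input)  # quantized input levels 0..3
    return {(a, b): subnet(a, b) for a, b in itertools.product(levels, levels)}

table = to_truth_table()
print(len(table))      # 16 entries: one per input combination
print(table[(1, 0)])   # subnet(1, 0) = max(0, 2) + 1 = 3
```

However dense the sub-network is, only its input/output behavior survives into the table, which is why density can be "hidden" inside a single LUT.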
## 🛠️ Quickstart Guide

### 1. Set Up Conda Environment

> Requires [Miniconda](https://docs.conda.io/en/latest/miniconda.html)

```bash
conda create -n neuralut python=3.12.4
conda activate neuralut
pip install torch==2.4.0 torchvision==0.19.0
```

👉 For CUDA-specific instructions, refer to the [PyTorch installation guide](https://pytorch.org/get-started/locally/).
### 2. Install Brevitas

```bash
conda install -y packaging pyparsing
conda install -y docrep -c conda-forge
pip install --no-cache-dir git+https://github.com/Xilinx/brevitas.git@67be9b58c1c63d3923cac430ade2552d0db67ba5
```

### 3. Install Project Dependencies

```bash
git clone https://github.com/MartaAndronic/NeuraLUT
cd NeuraLUT
pip install -r requirements.txt
pip install .
```
### 4. Enable Experiment Tracking

```bash
pip install wandb
wandb login
```
---

## 🔧 Optional Tools (for Hardware Integration)

### ✅ Vivado Design Suite

Download and install from [Xilinx Vivado](https://www.xilinx.com/products/design-tools/vivado.html).
📌 _Version used in our experiments: **Vivado 2020.1**._

### ✅ Verilator

```bash
nix-store --realise /nix/store/q12yxbndfwibfs5jbqwcl83xsa5b0dh8-verilator-4.110
```

This Nix store path is specific to our setup; on other systems, install Verilator 4.110 through your package manager.

### ✅ oh-my-xilinx

```bash
git clone https://github.com/ollycassidy13/oh-my-xilinx.git /path/to/local/dir
export OHMYXILINX=/path/to/local/dir
```
---

## 🌿 ReducedLUT

We released a dedicated [ReducedLUT branch](https://github.com/MartaAndronic/NeuraLUT/tree/reducedlut) that demonstrates the **L-LUT compression pipeline** described in our ReducedLUT paper:

📄 [arXiv](https://arxiv.org/abs/2412.18579) | 📘 [ACM DL](https://dl.acm.org/doi/10.1145/3706628.3708823) | 📦 [Zenodo](https://doi.org/10.5281/zenodo.14499541)

---
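As a toy illustration of why don't-cares enable table compression (an assumption-laden sketch, not the ReducedLUT algorithm itself): input patterns the network never produces leave their table entries unconstrained, so two sub-tables that agree wherever both are constrained can share one stored copy.

```python
# Two halves of a lookup table; None marks a "don't care" entry whose
# input pattern the network never actually produces.
upper = [1, 0, None, 1]
lower = [1, 0, 0, 1]

def compatible(a, b):
    """Two table halves can share storage if they agree wherever both care."""
    return all(x is None or y is None or x == y for x, y in zip(a, b))

# Resolve each don't-care toward the other half: store one table, not two.
if compatible(upper, lower):
    merged = [y if x is None else x for x, y in zip(upper, lower)]
print(merged)  # [1, 0, 0, 1]
```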
## 🧬 What's New in NeuraLUT vs LogicNets?

| Feature | LogicNets | NeuraLUT |
|---------|-----------|----------|
| **Dataset Support** | Jet Substructure | Jet Substructure, MNIST |
| **Training Flow** | Weight mask for sparsity | FeatureMask for input-channel control |
| **Forward Function** | Basic FC layers | Multiple FCs + skip connections |
| **Experiment Logging** | TensorBoard | Weights & Biases |
| **GPU Utilization** | Baseline | Tailored for optimal GPU utilization |
| **Neuron Enumeration** | Basic LUT inference | Batched truth-table generation |
| **Architecture Customization** | Limited | Novel model designs described in the paper |

---
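The FeatureMask row above can be pictured with a small stdlib-only sketch (names such as `feature_mask` and `feature_indices` are illustrative, not the repo's API): rather than zeroing weights with a mask, gather only the `fanin` features assigned to each output channel before the dense layer runs.

```python
fanin = 2
# One index list per output channel: which input features feed that channel.
feature_indices = [
    [0, 3],  # channel 0 sees inputs 0 and 3
    [1, 2],  # channel 1 sees inputs 1 and 2
]

def feature_mask(x):
    """Reshape a feature vector into per-channel fan-in groups."""
    return [[x[i] for i in idx] for idx in feature_indices]

x = [10, 20, 30, 40]
print(feature_mask(x))  # [[10, 40], [20, 30]]
```

The dense per-channel sub-network then operates only on its gathered group, so no multiply is spent on a masked-out weight.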
## 📚 Citation

#### If this repo contributes to your research or FPGA design, please cite our NeuraLUT paper:

```bibtex
@inproceedings{andronic2024neuralut,
    author = "Andronic, Marta and Constantinides, George A.",
    title = "{NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions}",
    ...
    year = 2024,
    note = "doi: 10.1109/FPL64840.2024.00028"
}
```

#### If ReducedLUT contributes to your research, please also cite:

```bibtex
@inproceedings{reducedlut,
    author = {Cassidy, Oliver and Andronic, Marta and Coward, Samuel and Constantinides, George A.},
    title = "{ReducedLUT: Table Decomposition with ``Don't Care'' Conditions}",
    year = {2025},
    isbn = {9798400713965},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    note = "doi: 10.1145/3706628.3708823",
    booktitle = {Proceedings of the 2025 ACM/SIGDA International Symposium on Field Programmable Gate Arrays},
    pages = {36--42},
    location = {Monterey, CA, USA},
}
```
---

## 🤝 Acknowledgements

NeuraLUT builds on foundational work from [LogicNets](https://github.com/Xilinx/logicnets) (Apache 2.0).
Special thanks to the open-source hardware ML community for their inspiration and contributions.

---

logo.png

301 KB

requirements.txt

Lines changed: 6 additions & 3 deletions
# See the License for the specific language governing permissions and
# limitations under the License.

pyverilator==0.7.0
numpy==1.26.4
h5py==3.12.1
pandas==2.2.3
scikit-learn==1.5.2
PyYAML==6.0.1
