NeuraLUT is the first quantized neural network training methodology that maps dense, full-precision sub-networks with skip connections to LUTs, leveraging the underlying structure of the FPGA architecture.
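The core mapping can be sketched in a few lines (a rough illustration only: the sizes, names, and quantization scheme below are assumptions, not the repo's actual API). Because the sub-network's inputs are quantized to a few bits, the full-precision sub-network, skip connection included, can be exhaustively evaluated and frozen into a truth table at hardware-generation time:

```python
# Illustrative sketch only -- sizes, names, and quantization are assumptions.
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
FAN_IN, BITS = 2, 2  # 2 inputs, 2 bits each -> 2**(2*2) = 16 LUT entries
W1 = rng.normal(size=(4, FAN_IN))  # dense, full-precision hidden layer
W2 = rng.normal(size=(1, 4))

def subnet(x):
    """Dense ReLU layer plus a skip connection, then 1-bit output quantization."""
    h = np.maximum(W1 @ x, 0.0)
    y = W2 @ h + x.sum()  # skip connection from the input to the output
    return int(y.item() >= 0)

# Enumerate every quantized input pattern into a lookup table.
levels = np.linspace(-1.0, 1.0, 2**BITS)
lut = {codes: subnet(levels[list(codes)])
       for codes in product(range(2**BITS), repeat=FAN_IN)}
print(len(lut))  # 16 entries: small enough to map onto FPGA LUTs
```

The table, not the weights, is what ends up in hardware, which is why the sub-network itself can stay dense and full-precision during training.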
> _Built on top of [LogicNets](https://github.com/Xilinx/logicnets), NeuraLUT introduces new architecture designs, optimized training flows, and innovative sparsity handling._
NeuraLUT-Assemble (FCCM'25) extends our prior work by assembling multiple NeuraLUT neurons into tree structures with larger fan-in.
- The assembling strategy groups connections at the input of these tree structures, guided by our hardware-aware pruning method.
- This design achieves better trade-offs in LUT utilization, latency, and accuracy compared to the original NeuraLUT framework.
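The tree idea can be illustrated with a pure-Python sketch (`lut_from_fn`, `majority`, and the fan-in sizes are hypothetical, not the repo's API): each leaf LUT absorbs one group of inputs, and a root LUT combines the leaf outputs, so a two-level tree of k-input LUTs reaches an effective fan-in of k².

```python
# Hypothetical sketch of assembling LUT "neurons" into a tree -- not the repo's API.
from itertools import product

def lut_from_fn(fn, fan_in):
    """Freeze a function of `fan_in` binary inputs into a truth table."""
    return {bits: fn(bits) for bits in product((0, 1), repeat=fan_in)}

def majority(bits):
    return int(sum(bits) > len(bits) / 2)

# Each leaf LUT absorbs a group of 3 inputs; the root LUT combines the three
# leaf outputs, so the assembled tree has an effective fan-in of 3 * 3 = 9.
leaf = lut_from_fn(majority, 3)
root = lut_from_fn(majority, 3)

def tree_eval(bits9):
    groups = [tuple(bits9[i:i + 3]) for i in range(0, 9, 3)]
    leaf_outs = tuple(leaf[g] for g in groups)
    return root[leaf_outs]

print(tree_eval((1, 1, 0, 1, 0, 1, 0, 0, 0)))  # -> 1
```

In NeuraLUT-Assemble the leaves are trained NeuraLUT sub-networks rather than fixed Boolean functions, and the input grouping is chosen by the hardware-aware pruning method rather than by fixed slices.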
## NeuraLUT-Assemble on the jet substructure tagging dataset (CERNBox)
This folder provides the code and resources to reproduce our NeuraLUT-Assemble results on the CERNBox jet substructure tagging dataset.

To reproduce the results in our paper, follow the steps below. Then compile the generated Verilog files with Vivado 2020.1, targeting the xcvu9p-flgb2104-2-i FPGA part, using the Flow_PerfOptimized_high strategy, and synthesizing in Out-of-Context (OOC) mode.
We also include a pretrained checkpoint in the `test_demo` folder so you can skip training and go straight to evaluation and hardware generation.
> These checkpoints are not the exact ones used in the paper but are provided for convenience and practice.