Python tools for participating in Neural Latents Benchmark '21.
Neural Latents Benchmark '21 (NLB'21) is a benchmark suite for unsupervised modeling of neural population activity. The suite includes four datasets spanning a variety of brain areas and experiments. The primary task in the benchmark is co-smoothing, or inference of firing rates of unseen neurons in the population.
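Concretely, the co-smoothing setup can be pictured with a toy spike tensor. This is an illustrative NumPy sketch only; the array names, shapes, and held-in/held-out split sizes below are made up and are not the benchmark's actual data format:

```python
import numpy as np

# Toy spike-count tensor shaped (trials, time bins, neurons);
# all sizes here are invented for illustration.
rng = np.random.default_rng(0)
spikes = rng.poisson(lam=1.0, size=(10, 50, 30))

# Co-smoothing splits the population into held-in and held-out neurons:
# a model observes the held-in spikes and must infer firing rates
# for the held-out neurons on the same trials.
n_heldin = 24
heldin_spikes = spikes[:, :, :n_heldin]   # visible to the model
heldout_spikes = spikes[:, :, n_heldin:]  # rates to be predicted

print(heldin_spikes.shape, heldout_spikes.shape)  # (10, 50, 24) (10, 50, 6)
```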
This repo contains code to facilitate participation in NLB'21:
- `nlb_tools/` contains code to load and preprocess our dataset files, format data for modeling, and locally evaluate results
- `examples/tutorials/` contains tutorial notebooks demonstrating basic usage of `nlb_tools`
- `examples/baselines/` holds the code we used to run our baseline methods, which may serve as helpful references for more extensive usage of `nlb_tools`
- `data/` contains the ground-truth evaluation data for the test split, made public in January 2026 with the end of the EvalAI challenge
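The co-smoothing score used by the benchmark is Poisson bits per spike: the log-likelihood gain of the predicted rates over a null model that assigns each held-out neuron its mean firing rate. The following is a minimal NumPy sketch of that computation, not the actual `nlb_tools` evaluation code:

```python
import numpy as np

def poisson_bits_per_spike(rates, spikes):
    """Sketch of the co-smoothing metric: Poisson log-likelihood of
    predicted rates, compared against a null model that uses each
    neuron's mean firing rate.

    rates, spikes: arrays of shape (time bins, neurons), rates > 0.
    """
    # The log(k!) terms are identical for model and null, so they cancel
    # in the difference and are omitted here.
    ll_model = np.sum(spikes * np.log(rates) - rates)
    null_rates = np.broadcast_to(spikes.mean(axis=0), spikes.shape)
    ll_null = np.sum(spikes * np.log(null_rates) - null_rates)
    # Normalize the likelihood gain to bits per observed spike
    return (ll_model - ll_null) / (spikes.sum() * np.log(2))

# Toy check: the rates that generated the spikes should beat the null model
rng = np.random.default_rng(0)
true_rates = rng.uniform(0.5, 2.0, size=(100, 5))
spikes = rng.poisson(true_rates)
print(poisson_bits_per_spike(true_rates, spikes))  # positive
```

For official scores, use the evaluation utilities in `nlb_tools` rather than this sketch.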
The package can be installed with the following command:
pip install nlb-tools
However, to run the tutorial notebooks locally or make any modifications to the code, you should clone the repo. The package can then be installed with the following commands:
git clone https://github.com/neurallatents/nlb_tools.git
cd nlb_tools
pip install -e .
This package requires Python 3.7+. It was developed in Python 3.7, which is the version we recommend.
We recommend reading and running through examples/tutorials/basic_example.ipynb to learn how to use nlb_tools to load and
format data for our benchmark. You can also find Jupyter notebooks demonstrating how to run GPFA and SLDS on the benchmark in
examples/tutorials/.
For more information on the benchmark:
- our main webpage contains general information on our benchmark pipeline and introduces the datasets
- our EvalAI challenge is where past submissions were evaluated and displayed on the leaderboard. As of January 2026, new submissions are no longer evaluated, but the test evaluation data is now public in the `data/` directory of this repo
- our datasets are available on DANDI: MC_Maze, MC_RTT, Area2_Bump, DMFC_RSG, MC_Maze_Large, MC_Maze_Medium, MC_Maze_Small
- our paper describes our motivations behind this benchmarking effort as well as various technical details and explanations of design choices made in preparing NLB'21
- our Slack workspace lets you interact directly with the developers and other participants. Please email fpei6 [at] gatech [dot] edu for an invite link