HoloMambaRec is a minimal, single-file reference implementation of holographic binding with Mamba state space models for sequential recommendation. It includes data loaders for Amazon Beauty and MovieLens-1M, model definitions (HoloMamba, SASRec, GRU4Rec, and an item-only ablation), and scripts for benchmarking, ablations, compression profiling, and lightweight grid search.
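For orientation, "holographic binding" here refers to holographic reduced representations, where two vectors are bound by circular convolution and approximately unbound by circular correlation, keeping dimensionality fixed. A minimal NumPy sketch of the operation (illustrative only — the script's own binding code may differ):

```python
import numpy as np

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Holographic binding: circular convolution, computed via FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=a.shape[-1])

def unbind(c: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Approximate unbinding: circular correlation with the cue vector."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(c), n=c.shape[-1])

rng = np.random.default_rng(0)
d = 1024
# HRR convention: i.i.d. N(0, 1/d) entries keep vector norms near 1.
pos = rng.normal(0.0, 1.0 / np.sqrt(d), d)   # e.g., a position/role vector
item = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # e.g., an item embedding

trace = bind(pos, item)          # bound pair, same dimensionality as the inputs
recovered = unbind(trace, pos)   # noisy reconstruction of `item`

cos = np.dot(recovered, item) / (np.linalg.norm(recovered) * np.linalg.norm(item))
```

The reconstruction is noisy but strongly correlated with the original item vector, which is what makes the binding usable as a fixed-size sequence summary.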
`HoloMambaRec.py` — end-to-end script (data prep, models, training loops, evals, and plotting).
Python 3.9+ and a CUDA GPU are recommended.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install torch mamba-ssm causal-conv1d pandas numpy matplotlib tqdm requests

# Optional (if you want parity with the Colab snippet in the code header):
# pip install datasets transformers google-generativeai
```

The main entry point is `HoloMambaRec.py`. On first run it downloads the datasets into `data/amazon-beauty` and `data/ml-1m`.
```bash
python HoloMambaRec.py
```

What it does:
- Runs the benchmark loop on Amazon-Beauty and ML-1M with HoloMamba, SASRec, and GRU4Rec.
- Runs the binding vs. item-only ablation.
- Runs a grid search and writes `grid_search_results.csv` (set `GRID_MAX_TRIALS=0` or a small integer to skip or limit it).
- Runs compression/latency evaluations and saves plots.
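For reference, the hit-rate curves the benchmark saves typically follow the standard ranking protocol for sequential recommendation. A hedged sketch of HR@K and NDCG@K over a batch of scored items (not the script's exact evaluation code):

```python
import numpy as np

def hr_ndcg_at_k(scores: np.ndarray, target: np.ndarray, k: int = 10):
    """scores: (batch, n_items) model scores; target: (batch,) ground-truth item ids.
    Returns (HR@k, NDCG@k) averaged over the batch."""
    # Rank of each target item = number of items scored strictly higher (0-based).
    target_scores = scores[np.arange(len(target)), target]
    ranks = (scores > target_scores[:, None]).sum(axis=1)
    hits = ranks < k
    hr = hits.mean()
    # DCG gain of a single relevant item at 0-based rank r is 1 / log2(r + 2).
    ndcg = np.where(hits, 1.0 / np.log2(ranks + 2), 0.0).mean()
    return float(hr), float(ndcg)

# Toy check: when the target is always the top-scored item, both metrics are 1.
scores = np.eye(4) * 5.0 + 0.1
hr, ndcg = hr_ndcg_at_k(scores, np.arange(4), k=10)
```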
Notes:
- The full run is compute-heavy; use a GPU and consider trimming epochs or commenting out sections in `__main__` for quicker smoke tests.
- Figures and CSVs are written to the repository root (e.g., `learning_curve_amazon-beauty_hr.png`, `compression_runtime_ml-1m.png`).
- Adjust global defaults in `GLOBAL_CONFIG` (e.g., `d_model`, `n_layers`, `batch_size`, `epochs`, `use_compression`).
- Modify dataset-specific overrides inside `run_benchmark` and `run_compression_evals`.
- To evaluate only cold-start behavior or compression, call `evaluate_cold_start` or `run_compression_evals` directly from a notebook after constructing your loaders.
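Cold-start evaluation commonly buckets test interactions by how often the target item appears in training and reports metrics per bucket. A hedged, self-contained sketch of such a split — `evaluate_cold_start` in the script may use different criteria, and the threshold here is an assumption:

```python
from collections import Counter

def split_cold_start(train_items, test_pairs, threshold=5):
    """Partition (user, item) test pairs into cold vs. warm buckets by the
    item's training frequency. `train_items` is a flat list of training
    item ids; items seen fewer than `threshold` times count as cold."""
    freq = Counter(train_items)
    cold = [p for p in test_pairs if freq[p[1]] < threshold]
    warm = [p for p in test_pairs if freq[p[1]] >= threshold]
    return cold, warm

train = [1, 1, 1, 1, 1, 2]  # item 1 seen five times, item 2 once
cold, warm = split_cold_start(train, [(0, 1), (0, 2), (1, 2)], threshold=5)
```

Metrics computed separately on the cold bucket isolate how well the model generalizes to rarely seen items.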
If you use this codebase, please reference the HoloMambaRec project and include links back to this repository.