
SLAM-Former: Putting SLAM into One Transformer

arXiv Project Page

Yijun Yuan, Zhuoguang Chen, Kenan Li, Weibang Wang, Hang Zhao

IIIS, Tsinghua University

@article{slam-former,
      title={SLAM-Former: Putting SLAM into One Transformer},
      author={Yijun Yuan and Zhuoguang Chen and Kenan Li and Weibang Wang and Hang Zhao},
      journal={arXiv preprint arXiv:2509.16909},
      year={2025}
}

Updates

  • [Mar 11, 2026] Released training code. See the train branch for details.
  • [Mar 4, 2026] Released SLAM code with KV pruning available.
  • [Feb 26, 2026] Released the training data.
  • [Sep 24, 2025] Two helpful blog posts for reading SLAM-Former: here and here.
  • [Sep 23, 2025] Preprint release.

Getting Started

1. Clone SLAM-Former

git clone https://github.com/Tsinghua-MARS-Lab/SLAM-Former.git
cd SLAM-Former

2. Create conda environment

conda create -n SLAM-Former python=3.11
conda activate SLAM-Former 

3. Install requirements

pip install -r requirements.txt
pip install -e .

Running SLAM Demo

Prepare a folder containing your image sequence, then run:

python slam/demo.py \
    --ckpt_path ckpt/checkpoint.pth.model \
    --image_folder /path/to/your/images/ \
    --output_dir /output/result \
    --target_size 518 \
    --retention_ratio 0.5
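The `--retention_ratio` flag controls how aggressively the KV cache is pruned (0.5 keeps half of the cached entries). The exact pruning criterion is internal to the repo; the sketch below is a generic illustration only, assuming a per-token importance score (e.g. accumulated attention weight) — `prune_kv_cache` and its arguments are hypothetical names, not SLAM-Former's API:

```python
import numpy as np

def prune_kv_cache(keys, values, scores, retention_ratio=0.5):
    """Keep only the highest-scoring fraction of cached key/value entries.

    keys, values: (seq_len, dim) arrays; scores: per-token importance.
    Generic sketch of retention-ratio pruning, not the repo's criterion.
    """
    seq_len = keys.shape[0]
    n_keep = max(1, int(seq_len * retention_ratio))
    # Indices of the n_keep highest-scoring tokens, restored to original order
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return keys[keep], values[keep]

# Example: prune an 8-token cache down to 4 entries
keys = np.random.randn(8, 16)
values = np.random.randn(8, 16)
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])
k, v = prune_kv_cache(keys, values, scores, retention_ratio=0.5)
print(k.shape)  # (4, 16)
```

Lower retention ratios reduce memory and speed up inference on long sequences, at some cost in reconstruction quality.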

Visualization

Real-time visualization during inference: add --vis to the command above. The 3D reconstruction process can be viewed interactively in Rerun.

Static visualization of saved results:

python slam/visualize_results.py \
    --result_dir /path/to/output_dir

Data

Checkpoint

  • Hugging Face — recommended to use --target_size 518 for inference.
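The recommended `--target_size 518` is presumably tied to the backbone's patch size. Assuming a ViT-style encoder with patch size 14 (a common choice, e.g. in DINOv2-family backbones — an assumption, not stated in this README), 518 divides evenly into a 37 × 37 patch grid:

```python
# Assumption: a ViT-style backbone with patch size 14 (not confirmed by the README)
patch_size = 14
target_size = 518
assert target_size % patch_size == 0
print(target_size // patch_size)  # 37 patches per side
```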
