Paper | Project Page | Weights | Dataset
Less is More (LiMo) is a transformer-based visual navigation policy that predicts goal-conditioned SE(2) trajectories from a single RGB observation. We demonstrate that augmenting limited expert demonstrations with geometric planner-generated trajectories yields substantial performance improvements, achieving robust visual navigation through strategic data curation rather than simply collecting more data.
- Inference code
- Checkpoints released (SafeTensors on HuggingFace)
- Dataset and training code
- ROS integration
- Dataset builder with MPPI planner
This project uses uv for dependency management. We assume an NVIDIA GPU with driver version ≥ 530 (for CUDA 12.1 support).
```bash
git clone https://github.com/leggedrobotics/less-is-more \
&& cd less-is-more \
&& uv sync \
&& uv pip install -e .
```

The codebase uses Hydra for configuration management in the limo package.
Download the pretrained LiMo checkpoint:
```bash
wget -O data/weights/limo_trained_on_D_aug.safetensors \
https://huggingface.co/yv1es/less-is-more/resolve/main/limo_trained_on_D_aug.safetensors
```

Run inference on the provided examples:

```bash
uv run limo/src/inference.py
```

The default configuration in limo/configs/inference.yaml processes images from the example dataset in data/inference_example/.
The inference pipeline is configured via limo/configs/inference.yaml:
- weights_path: Path to model weights (default: data/weights/limo_trained_on_D_aug.safetensors)
- input_path: Single image file or directory containing .jpg, .png, or .jpeg files
- goals_csv: CSV file defining navigation targets (format: x,y,yaw in meters/radians, robot frame)
- camera_info: Optional YAML file with camera intrinsics for path projection visualization
- output_dir: Directory for generated visualizations
See data/inference_example/ for a complete working example with sample images, goals, and camera calibration.
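The goals CSV described above can be parsed with the standard library alone. The snippet below is an illustrative sketch: the column names `x`, `y`, `yaw` follow the format stated in the configuration notes, but the example file in data/inference_example/ is the authoritative reference.

```python
import csv
import io

# Hypothetical goals CSV: each row is one navigation target in the
# robot frame -- x and y in meters, yaw in radians.
goals_csv = """x,y,yaw
2.0,0.0,0.0
1.5,-1.0,-0.785
"""

# Parse each row into an (x, y, yaw) tuple.
goals = [
    (float(row["x"]), float(row["y"]), float(row["yaw"]))
    for row in csv.DictReader(io.StringIO(goals_csv))
]
print(goals)  # [(2.0, 0.0, 0.0), (1.5, -1.0, -0.785)]
```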
The training pipeline uses PyTorch Lightning for model training and Hydra for configuration management.
Train the model using:

```bash
uv run limo/src/train.py experiment=train_limo_debug
```

(This debug experiment config only starts a quick run to test your setup.)
The main entry point is limo/src/train.py, which:
- Loads configuration from limo/configs/train.yaml and an experiment config
- Automatically pulls the dataset from HuggingFace
- Runs training with logging
- Saves checkpoints and weights
The training pipeline uses Hydra with the following structure:
Config directories (limo/configs/):
- data/: Dataset configuration (e.g., limo.yaml for LimoDataModule settings)
- model/: Model architecture (e.g., limo.yaml for network configuration)
- trainer/: PyTorch Lightning trainer settings
- logger/: Logging backends (e.g., wandb, csv)
- callbacks/: Training callbacks (e.g., checkpointing)
- experiment/: Complete experiment configs that override defaults
Experiment configs (limo/configs/experiment/):
- train_limo_debug.yaml: Quick debug run with 5 epochs, limited batches
- train_limo_on_D_aug.yaml: Full training on augmented dataset
- train_limo_side_cams.yaml: Training with side camera inputs
Override configs using the Hydra syntax:
```bash
# Use a specific experiment
uv run limo/src/train.py experiment=train_limo_on_D_aug

# Override specific parameters
uv run limo/src/train.py experiment=train_limo_debug data.batch_size=32 trainer.max_epochs=10

# Disable wandb logging
uv run limo/src/train.py experiment=train_limo_debug logger=null
```

By default, training uses Weights & Biases for logging.
- Checkpoints: Saved to logs/train/runs/&lt;timestamp&gt;/checkpoints/ (PyTorch Lightning format)
- SafeTensors weights: Automatically converted and saved to logs/train/runs/&lt;timestamp&gt;/weights/ after each checkpoint
LiMo's training data is based on the Grand Tour dataset from HuggingFace: leggedrobotics/grand_tour_dataset
We added LiMo's data to the same HuggingFace repo.
Note: The dataset is automatically pulled from HuggingFace. No manual download required!
The Grand Tour dataset contains multiple mission recordings. Different sample types can be extracted:
- Teleoperation samples (tel): Expert demonstrations from human teleoperation
- Geometric samples (geo): Trajectories generated by the MPPI geometric planner
- Augmented samples (aug): The combined set of teleoperation and geometric samples
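The relationship between the three sample types can be illustrated with plain Python sets. The frame IDs here are made up; in the real dataset the samples come from the Grand Tour mission recordings.

```python
# Hypothetical sample IDs standing in for dataset frames.
tel = {"frame_001", "frame_002"}  # expert teleoperation samples
geo = {"frame_003", "frame_004"}  # MPPI planner-generated samples

# The augmented set is simply the union of the two sources.
aug = tel | geo
print(sorted(aug))  # ['frame_001', 'frame_002', 'frame_003', 'frame_004']
```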
Select the dataset type using the get_dataset() method in dataset/src/limo_datset.py:
```python
from dataset.src.limo_datset import get_dataset

# Load teleoperation samples only
dataset_tel = get_dataset(
    dataset_type="tel",  # or "geo", "aug"
    dataset_folder="data/dataset",
    missions_csv="missions_split.csv",
    with_side_cams=False,
)

# Load augmented samples with side cameras
dataset_aug = get_dataset(
    dataset_type="aug",
    dataset_folder="data/dataset",
    missions_csv="missions_split.csv",
    with_side_cams=True,
)
```

The missions_split.csv file controls which missions are used and how they're split:
CSV Format:
```csv
Mission,Timestamp,Split
grandtour_mission_1,2024-01-15,train
grandtour_mission_2,2024-01-16,val
grandtour_mission_3,2024-01-17,test
```
Usage:
- Include only specific missions by adding rows to the CSV
- Control split ratios by adjusting the number of missions per split
- Quick debugging: Use a subset of missions (see limo/configs/data/missions_split_debug.csv)
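As a sketch of how such a split file can be consumed, the snippet below groups mission names by their `Split` column using only the standard library. It mirrors the CSV format shown above, but it is illustrative code, not the loader used by LimoDataModule.

```python
import csv
import io
from collections import defaultdict

# Inline copy of the missions_split.csv format shown above.
missions_csv = """Mission,Timestamp,Split
grandtour_mission_1,2024-01-15,train
grandtour_mission_2,2024-01-16,val
grandtour_mission_3,2024-01-17,test
"""

# Group mission names by split so a data module could select
# the right recordings for train/val/test.
splits = defaultdict(list)
for row in csv.DictReader(io.StringIO(missions_csv)):
    splits[row["Split"]].append(row["Mission"])

print(dict(splits))
# {'train': ['grandtour_mission_1'], 'val': ['grandtour_mission_2'], 'test': ['grandtour_mission_3']}
```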
If you use this work in your research, please cite:
```bibtex
@misc{inglin2026morescalablevisualnavigation,
  title={Less Is More: Scalable Visual Navigation from Limited Data},
  author={Yves Inglin and Jonas Frey and Changan Chen and Marco Hutter},
  year={2026},
  eprint={2601.17815},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2601.17815},
}
```
