# Towards Intrinsic-Aware Monocular 3D Object Detection

Michigan State University; University of North Carolina at Chapel Hill
```bibtex
@inproceedings{zhang2026towards,
  title={Towards Intrinsic-Aware Monocular 3D Object Detection},
  author={Zhang, Zhihao and Kumar, Abhinav and Liu, Xiaoming},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2026}
}
```

- Abstract
- Updates
- Checklist
- Method Overview
- Installation
- Data Preparation
- Training & Evaluation
- Model Zoo
- Acknowledgements
- License
## Abstract

Monocular 3D object detection (Mono3D) aims to infer object locations and dimensions in 3D space from a single RGB image. Despite recent progress, existing methods remain highly sensitive to camera intrinsics and struggle to generalize across diverse settings, since the intrinsics govern how 3D scenes are projected onto the image plane. We propose MonoIA, a unified intrinsic-aware framework that models and adapts to intrinsic variation through a language-grounded representation. The key insight is that intrinsic variation is not merely a numeric difference but a perceptual transformation that alters apparent scale, perspective, and spatial geometry. To capture this effect, MonoIA employs large language models and vision–language models to generate intrinsic embeddings that encode the visual and geometric implications of camera parameters. These embeddings are hierarchically integrated into the detection network via an Intrinsic Adaptation Module, allowing the model to modulate its feature representations according to camera-specific configurations and maintain consistent 3D detection across intrinsics. This shifts intrinsic modeling from numeric conditioning to semantic representation, enabling robust and unified perception across cameras. Extensive experiments show that MonoIA achieves new state-of-the-art results on standard benchmarks including KITTI, Waymo, and nuScenes (e.g., +1.18% on the KITTI leaderboard), and further improves performance under multi-dataset training (e.g., +4.46% on KITTI Val).
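To give a rough feel for intrinsic-conditioned feature modulation, the following is a minimal, hypothetical PyTorch sketch: it maps the four intrinsic parameters (fx, fy, cx, cy) to an embedding and applies a FiLM-style per-channel scale and shift to backbone features. All names, shapes, and the MLP embedder are illustrative assumptions, not the released MonoIA implementation (which derives embeddings from LLMs/VLMs instead):

```python
import torch
import torch.nn as nn

class IntrinsicAdapter(nn.Module):
    """Illustrative sketch: modulate backbone features with an embedding
    derived from the camera intrinsics (fx, fy, cx, cy). In MonoIA the
    embedding would come from an LLM/VLM; a small MLP stands in here."""

    def __init__(self, feat_dim: int, embed_dim: int = 64):
        super().__init__()
        # Map the 4 intrinsic parameters to an embedding vector.
        self.embed = nn.Sequential(
            nn.Linear(4, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Predict per-channel scale and shift (FiLM-style modulation).
        self.to_scale = nn.Linear(embed_dim, feat_dim)
        self.to_shift = nn.Linear(embed_dim, feat_dim)

    def forward(self, feats: torch.Tensor, intrinsics: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); intrinsics: (B, 4) holding fx, fy, cx, cy
        e = self.embed(intrinsics)
        scale = self.to_scale(e)[:, :, None, None]
        shift = self.to_shift(e)[:, :, None, None]
        return feats * (1.0 + scale) + shift

feats = torch.randn(2, 256, 24, 80)
intrinsics = torch.tensor([[721.5, 721.5, 609.6, 172.9],
                           [2055.6, 2055.6, 939.7, 641.2]])  # KITTI-/Waymo-like values
out = IntrinsicAdapter(256)(feats, intrinsics)
print(out.shape)  # torch.Size([2, 256, 24, 80])
```

The modulation leaves the feature shape unchanged, so it can be dropped in at multiple backbone stages, matching the hierarchical integration described above.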
## Updates

- [March 31, 2026] Released pretrained models and checkpoints.
- [March 31, 2026] Released official code and training logs.
- [Feb 12, 2026] MonoIA accepted by CVPR 2026.
## Checklist

- ✅ Code release
- ✅ Pretrained models
- ✅ Training logs
- ☐ nuScenes and Waymo dataset configs
## Installation

1. Clone the repository and create the conda environment:

```bash
git clone git@github.com:alanzhangcs/MonoIA.git
cd MonoIA
conda create -n monoia python=3.9
conda activate monoia
```

2. Install PyTorch and torchvision (CUDA 12.1):

```bash
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia
```

3. Install requirements and compile the deformable attention CUDA ops:

```bash
pip install -r requirements.txt
cd lib/models/monoia/ops/
bash make.sh
cd ../../../..
```

## Data Preparation

Download the KITTI 3D Object Detection dataset and organize it under `data/KITTIDataset/` as follows (this should match `dataset.root_dir` in the config):

```
MonoIA/
├── config/
├── data/
│   └── KITTIDataset/
│       ├── ImageSets/
│       ├── training/
│       │   ├── image_2/
│       │   ├── label_2/
│       │   └── calib/
│       └── testing/
│           ├── image_2/
│           └── calib/
```
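Before training, it can help to sanity-check that this layout is in place. A minimal script along these lines (the paths mirror the tree above; adjust `root` if your `dataset.root_dir` differs):

```python
from pathlib import Path

# Mirrors the KITTI directory tree above; adjust if dataset.root_dir differs.
root = Path("data/KITTIDataset")
expected = [
    "ImageSets",
    "training/image_2", "training/label_2", "training/calib",
    "testing/image_2", "testing/calib",
]

missing = [d for d in expected if not (root / d).is_dir()]
if missing:
    print("Missing directories:", missing)
else:
    n_train = len(list((root / "training/image_2").glob("*.png")))
    print(f"Layout OK, {n_train} training images found.")
```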
nuScenes and Waymo dataset configs are coming soon.
## Training & Evaluation

Train on KITTI:

```bash
bash train.sh 0 --config config/monoia.yaml
```

Evaluate on KITTI Val:

```bash
bash test.sh 0 --config config/monoia_val.yaml
```

Evaluate for the KITTI leaderboard (test set):

```bash
bash test.sh 0 --config config/monoia_leaderboard.yaml
```

By default, logs and checkpoints are saved under `outputs/` (see `trainer.save_path` in the config). Training runs for 250 epochs with AdamW (lr=2e-4) and step-based LR decay at epochs [85, 125, 165, 205].
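The schedule above corresponds to a standard multi-step decay; a minimal PyTorch sketch of it follows. The decay factor of 0.1 is an assumption here, so check the optimizer settings in `config/monoia.yaml` for the actual value:

```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the detector
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
# gamma=0.1 is assumed; see the optimizer section of config/monoia.yaml.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[85, 125, 165, 205], gamma=0.1)

lrs = []
for epoch in range(250):
    optimizer.step()      # one training epoch would run here
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])

print(lrs[84], lrs[124], lrs[249])  # lr after milestones 85, 125, and all four
```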
## Model Zoo

| Setting | Config | AP3D Easy | AP3D Mod. | AP3D Hard | Checkpoint | Training log |
|---|---|---|---|---|---|---|
| KITTI | `config/monoia.yaml` | 33.61 | 24.40 | 20.80 | Model | Log |
We provide KITTI test-set submissions on the official KITTI leaderboard:
| Setting | Config | AP3D Easy | AP3D Mod. | AP3D Hard | Checkpoint |
|---|---|---|---|---|---|
| KITTI | `config/monoia_leaderboard.yaml` | 29.52 | 19.11 | 17.93 | Model |
## Acknowledgements

This project builds upon and adapts components from several excellent open-source projects; we thank the authors for making their code publicly available.
## License

This project is licensed under the MIT License. See `LICENSE` in the repository root for details.

