| title | Super Odometry |
|---|---|
| subtitle | Resilient Odometry via Hierarchical Adaptation |
| layout | page |
| show_sidebar | false |
| hide_hero | false |
| hide_footer | false |
| hero_height | is-large |
| hero_image | img/super_odometry/superodom_video.gif |
| hero_link | https://github.com/superxslam/SuperOdom |
| hero_link_text | See Our Code |
<div class="is-size-4 publication-authors" style="margin-top: 1.5rem; margin-bottom: 0.5rem;">
<span class="author-block" style="font-size: 95%;">
<a href="https://shibowing.github.io">Shibo Zhao</a><sup>1</sup>
<a href="https://github.com/StiphyJay">Sifan Zhou</a><sup>1</sup>
<a href="https://www.ri.cmu.edu/ri-people/yuchen-zhang/">Yuchen Zhang</a><sup>1</sup>
<a href="https://frc.ri.cmu.edu/~zhangji/">Ji Zhang</a><sup>1</sup>
<br>
<a href="https://sairlab.org/chenw/">Chen Wang</a><sup>2</sup>
<a href="https://www.ri.cmu.edu/ri-faculty/wenshan-wang/">Wenshan Wang</a><sup>1</sup>
<a href="https://www.ri.cmu.edu/ri-faculty/sebastian-scherer/">Sebastian Scherer</a><sup>1</sup>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>1</sup>Carnegie Mellon University; </span>
<span class="author-block"><sup>2</sup>University at Buffalo</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<span class="link-block">
<a href="https://www.science.org/stoken/author-tokens/ST-3125/full" class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>PDF</span>
</a>
</span>
<span class="link-block">
<a href="https://x.com/ShiboZhaoSLAM" class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<i class="fab fa-twitter"></i>
</span>
<span>Twitter</span>
</a>
</span>
<span class="link-block">
<a href="https://youtu.be/xpRZGgGaFRA" class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<i class="fab fa-youtube"></i>
</span>
<span>Video</span>
</a>
</span>
<span class="link-block">
<a href="https://github.com/superxslam/SuperOdom" class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
<span class="link-block">
<a href="#bibtex" class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-quote-left"></i>
</span>
<span>Citation</span>
</a>
</span>
</div>
</div>
</div>
</div>
"The goal of Super Odometry is to achieve resilience, adaptation, and generalization across all degraded environments"
<div style="text-align: center; margin-top: 1rem; padding: 2rem; background: #f8f9fa; border-radius: 15px; border-left: 5px solid #76b900;">
<p style="font-size: 1.2rem; color: #333; font-style: italic; font-weight: 500; line-height: 1.6;">
"For decades, odometry and SLAM have relied heavily on external sensors such as cameras and LiDAR. Super Odometry rethinks this paradigm by elevating inertial sensing to the core of state estimation, enabling robots to maintain reliable motion awareness even under extreme motion and severe perception degradation."
</p>
</div>
</div>
</div>
</div>
When we walk through smoke or darkness, our body still knows where we are. This innate sense of motion, guided by vestibular and inertial perception and known as path integration, reveals a profound truth: robust motion tracking begins not with vision, but with the body's internal sensing of movement.
Following this insight, we believe robotic systems also need a complementary "internal sense." Specifically, we developed a learned inertial module that captures a robot's internal dynamics and provides a motion prior as a fallback when external sensors such as LiDAR and cameras become unreliable. A key design goal behind Super Odometry is to unify resilience, adaptation, and generalization within a single odometry system.
</div>
</div>
</div>
We propose a reciprocal fusion scheme that combines a traditional factor graph with the learned inertial module.
Nominal Conditions: The IMU network learns motion patterns from free pose labels generated by the lower-level factor graph.
Degraded Conditions: The learned IMU network takes over, leveraging the captured motion dynamics to maintain reliable state estimation when external perception fails.
In this way, robustness becomes adaptive, evolving with the robot’s operating conditions.
</div>
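Conceptually, the reciprocal fusion loop can be sketched in a few lines of Python. This is an illustrative sketch only; the class and method names below are hypothetical placeholders, not the actual Super Odometry implementation, which fuses IMU, LiDAR, and visual factors in a full factor graph.

```python
# Sketch of the reciprocal fusion scheme (hypothetical names, not the real API).

class FactorGraph:
    """Stand-in for the lower-level factor-graph optimizer."""
    def optimize(self, imu_window):
        # The real system fuses IMU, LiDAR, and visual factors here;
        # we return a toy "pose" (mean of the window) for illustration.
        return sum(imu_window) / len(imu_window)

class LearnedIMU:
    """Stand-in for the learned inertial network."""
    def __init__(self):
        self.labels = []  # (imu_window, pose) pairs collected as supervision

    def add_training_label(self, imu_window, pose):
        # Nominal conditions: factor-graph poses are free training labels.
        self.labels.append((imu_window, pose))

    def predict(self, imu_window):
        # Degraded conditions: predict motion from inertial data alone.
        return sum(imu_window) / len(imu_window)

def fuse_step(imu_net, graph, imu_window, external_ok):
    """One update of the reciprocal fusion loop."""
    if external_ok:
        pose = graph.optimize(imu_window)          # external sensors reliable
        imu_net.add_training_label(imu_window, pose)
    else:
        pose = imu_net.predict(imu_window)         # fallback: learned IMU prior
    return pose
```

The key design point is the two-way exchange: the factor graph supervises the network for free in nominal conditions, and the network repays that supervision as a motion prior when perception degrades.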
<div style="margin: 2rem 0; padding: 2.5rem; background: #fdfdfd; border-radius: 20px; border: 1px solid #eee; box-shadow: 0 10px 30px rgba(0,0,0,0.05); position: relative;">
<div style="position: absolute; top: -15px; left: 30px; background: #76b900; color: white; padding: 5px 20px; border-radius: 20px; font-weight: 800; font-family: monospace;">Hierarchical Adaptation Framework</div>
<div class="columns is-centered">
<div class="column is-four-fifths">
<img src="/img/science_robotics/method1.jpg" alt="Hierarchical Adaptation Framework" style="width: 100%; border-radius:10px; box-shadow: 0 10px 25px rgba(0,0,0,0.1);" />
</div>
</div>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6;">
To balance efficiency and robustness, the system uses a multi-level scheme to manage environmental degradation:
</p>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
<strong>Lower Levels:</strong> Provide rapid, resource-efficient adjustments for mild disturbances.
</p>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
<strong>Higher Levels:</strong> Provide more complex and computationally intensive interventions to support state estimation recovery.
</p>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
This layered design keeps the system efficient under nominal conditions and robust in extreme scenarios, meeting the demands of diverse environments.
</p>
</div>
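The escalation logic can be illustrated with a small sketch: cheap adjustments first, costlier interventions only as degradation worsens. The severity thresholds and level names below are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch of hierarchical adaptation: escalate to costlier
# interventions only as degradation severity grows. Thresholds and level
# names are hypothetical, not taken from the actual system.

def select_adaptation_level(severity):
    """Map a degradation severity score in [0, 1] to an adaptation level."""
    if severity < 0.3:
        return "nominal"           # no intervention, full efficiency
    elif severity < 0.6:
        return "reweight_factors"  # rapid, resource-efficient adjustment
    elif severity < 0.9:
        return "imu_prior"         # lean on the learned inertial prior
    else:
        return "recovery"          # computationally intensive recovery
```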
<div style="margin: 2rem 0; padding: 2.5rem; background: #fdfdfd; border-radius: 20px; border: 1px solid #eee; box-shadow: 0 10px 30px rgba(0,0,0,0.05); position: relative;">
<div style="position: absolute; top: -15px; left: 30px; background: #76b900; color: white; padding: 5px 20px; border-radius: 20px; font-weight: 800; font-family: monospace;">Heterogeneous Learning-Based Inertial Odometry</div>
<div class="columns is-centered">
<div class="column is-four-fifths">
<video
muted
autoplay
loop
playsinline
controls
preload="metadata"
style="border-radius:10px; background-color: white; box-shadow: 0 10px 25px rgba(0,0,0,0.1); width: 100%;"
onerror="this.style.display='none'; this.nextElementSibling.style.display='block';">
<source src="/img/science_robotics/learning_imu_odometry_intro.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<p style="display:none; color: #cc0000; text-align: center; padding: 1rem;">Video failed to load. Please check your connection or try refreshing the page.</p>
</div>
</div>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
The learned inertial module is trained on more than 100 hours of data from heterogeneous robotic platforms, capturing comprehensive motion dynamics across aerial, wheeled, and legged robots. This enables the system to provide reliable motion priors when external sensors fail. Our single IMU model outperforms specialized, platform-specific IMU models across various robot platforms.
</p>
</div>
</div>
</div>
</div>
Super Odometry is evaluated on 13 consecutive types of hardware and environmental degradation in a single run, including visual, geometric, mixed, and complete degradation. For more details, please refer to the video.
<div style="margin: 2rem 0; padding: 2.5rem; background: #fdfdfd; border-radius: 20px; border: 1px solid #eee; box-shadow: 0 10px 30px rgba(0,0,0,0.05); position: relative;">
<div style="position: absolute; top: -15px; left: 30px; background: #76b900; color: white; padding: 5px 20px; border-radius: 20px; font-weight: 800; font-family: monospace;">Performance Under 13 Degradations</div>
<div class="columns is-centered">
<div class="column is-four-fifths">
<img src="/img/science_robotics/13_degradation_result.png" alt="13 Degradation Results" style="width: 100%; border-radius:10px; box-shadow: 0 10px 25px rgba(0,0,0,0.1);" />
</div>
</div>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
The color-coded trajectory depicts our estimated odometry of a legged robot navigating through 13 complex degradation scenarios. Despite these difficulties, the final end-point drift was only 20 cm over a total distance of 2966 m.
</p>
</div>
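As a quick sanity check on the reported numbers, the relative end-point drift works out to well under 0.01% of the distance traveled:

```python
# Relative drift from the reported end-point error and path length.
drift_m = 0.20       # 20 cm end-point drift
distance_m = 2966.0  # total traveled distance in meters
drift_pct = drift_m / distance_m * 100.0
print(f"relative drift: {drift_pct:.4f}%")  # about 0.0067%
```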
</div>
</div>
</div>
Stress Test: Evaluation of 13 types of degradation in a single run.
Generalization: Robust odometry across diverse conditions with various sensors and robots.
Geometric Degradation: State-direction adaptation in long corridors.
Learned Inertial Network: Performance of the pretrained IMU model.
Fallback Solution: Robust performance in smoke scenarios.
Onboard Performance: Robust odometry for exploration tasks using onboard devices.
The learning-based IMU model still requires faster adaptation to new robots and environments. Although it generalizes well across platforms, it struggles in unseen domains because of distribution gaps between training and test data. Incorporating both real-world and simulated IMU data could narrow this gap and improve generalization.
<div style="margin: 2rem 0; padding: 2.5rem; background: #fdfdfd; border-radius: 20px; border: 1px solid #eee; box-shadow: 0 10px 30px rgba(0,0,0,0.05); position: relative;">
<div style="position: absolute; top: -15px; left: 30px; background: #76b900; color: white; padding: 5px 20px; border-radius: 20px; font-weight: 800; font-family: monospace;">Online Learning & Adaptive Switching</div>
<p style="font-size: 1.0rem; color: #222; line-height: 1.6; margin-top: 1rem;">
Better online learning techniques are still needed to balance overfitting against catastrophic forgetting. We also need better strategies for switching between learned IMU odometry and factor-graph optimization; the current scheme is somewhat heuristic, and we hope to turn this hierarchical adaptation into a fully learning-based solution.
</p>
</div>
</div>
</div>
</div>
The current journal version of Super Odometry builds upon the previously released Super Odometry v1 system. Readers are referred to Super Odometry v1 and SuperLoc for additional background and implementation details.
<div style="background: #f8f9fa; padding: 2rem; border-radius: 15px; border-left: 5px solid #76b900; margin-bottom: 2rem;">
<h3 class="title is-4" style="color: #333; margin-bottom: 1rem;">Manuscript Reviewers</h3>
<p style="margin-bottom: 0;">
We thank <strong>Yuheng Qiu</strong>, <strong>Michael Kaess</strong>, <strong>Sudharshan Suresh</strong>, and <strong>Shubham Tulsiani</strong> for their valuable suggestions for the manuscript.
</p>
</div>
<div style="background: #f8f9fa; padding: 2rem; border-radius: 15px; border-left: 5px solid #76b900;">
<h3 class="title is-4" style="color: #333; margin-bottom: 1rem;">Real-World Experiments</h3>
<p style="margin-bottom: 0;">
We sincerely appreciate the work of <strong>Raphael Blanchard</strong>, <strong>Honghao Zhu</strong>, <strong>Rushan Jiang</strong>, <strong>Haoxiang Sun</strong>, <strong>Tianhao Wu</strong>, <strong>Yuanjun Gao</strong>, <strong>Damanpreet Singh</strong>, <strong>Lucas Nogueira</strong>, <strong>Guofei Chen</strong>, <strong>Parv Maheshwari</strong>, <strong>Matthew Sivaprakasam</strong>, <strong>Sam Triest</strong>, <strong>Micah Nye</strong>, <strong>Yifei Liu</strong>, <strong>Steve Willits</strong>, <strong>John Keller</strong>, <strong>Jay Karhade</strong>, <strong>Yao He</strong>, <strong>Mukai Yu</strong>, <strong>Andrew Jong</strong>, and <strong>John Rogers</strong> for their help in real-world experiments.
</p>
</div>
</div>
</div>
</div>
<pre><code>@article{zhao2025resilient,
  title   = {Resilient odometry via hierarchical adaptation},
  author  = {Shibo Zhao and Sifan Zhou and Yuchen Zhang and Ji Zhang and Chen Wang and Wenshan Wang and Sebastian Scherer},
  journal = {Science Robotics},
  volume  = {10},
  number  = {109},
  pages   = {eadv1818},
  year    = {2025},
  doi     = {10.1126/scirobotics.adv1818},
  url     = {https://www.science.org/doi/abs/10.1126/scirobotics.adv1818}
}

@inproceedings{zhao2021super,
  title        = {Super Odometry: IMU-centric LiDAR-visual-inertial estimator for challenging environments},
  author       = {Zhao, Shibo and Zhang, Hengrui and Wang, Peng and Nogueira, Lucas and Scherer, Sebastian},
  booktitle    = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages        = {8729--8736},
  year         = {2021},
  organization = {IEEE}
}</code></pre>

