This repository curates research papers on robot manipulation, featuring a smaller collection of non-learning control methods and a larger body of learning-based approaches.
This repository is continuously updated, and we warmly welcome contributions from the community. If you have papers, projects, or resources that are not yet included, please submit them via a pull request, open an issue for discussion, or email us!
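A pull-request entry follows the same four-column row format used in the paper tables below. A minimal sketch (the title, date, and links here are placeholders, not a real entry):

```markdown
Title | Venue | Date | Code |
---|---|---|---|
[Example Paper Title](https://arxiv.org/abs/0000.00000) | arXiv | 2025-01-01 | [Github](https://github.com/example/repo) |
```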
Our comprehensive survey is in progress; stay tuned for updates!
- [2025/08] Major revision of the classification system with a more refined taxonomy; substantial improvements across all sections.
- [2025/07] Expanded coverage of Dexterous, Soft Robotic, Mobile, Quadrupedal, and Humanoid Manipulation; refined the categorization and content of Awesome Simulators, Benchmarks, and Datasets; added non-learning-based control methods.
- [2025/06] Introduced new sections on Grasp in Cluttered Scenes, Quadrupedal and Humanoid Manipulation, and Learning from Human Demonstrations. Also improved the classification of the Applications section and added a subsection on Embodied QA Datasets.
- [2025/02] Added a new section on Bimanual Grasp.
- [2024/12] Introduced coverage of Dexterous Grasp.
- [2024/10] Repository is now public!
- 📝 Awesome Papers
- 📊 Awesome Simulators, Benchmarks and Datasets
- 🛠️ Awesome Techniques
Note: Other papers are summarized in the Contents.
Title | Venue | Date | Code |
---|---|---|---|
Two by Two: Learning Multi-Task Pairwise Objects Assembly for Generalizable Robot Manipulation | CVPR 2025 | 2025-04-09 | - |
FLAME: A Federated Learning Benchmark for Robotic Manipulation | arXiv | 2025-03-03 | - |
FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning | IJRR 2024 | 2024-01-16 | - |
Title | Venue | Date | Code |
---|---|---|---|
OmniEAR: Benchmarking Agent Reasoning in Embodied Tasks | arXiv | 2025-08-07 | - |
PAC Bench: Do Foundation Models Understand Prerequisites for Executing Manipulation Policies? | arXiv | 2025-06-30 | Project |
Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets | arXiv | 2025-05-21 | Huggingface |
PointArena: Probing Multimodal Grounding Through Language-Guided Pointing | arXiv | 2025-05-15 | - |
ManipBench: Benchmarking Vision-Language Models for Low-Level Robot Manipulation | arXiv | 2025-05-14 | Project |
ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models | IROS 2024 | 2024-03-17 | - |
OpenEQA: Embodied Question Answering in the Era of Foundation Models | CVPR 2024 | 2024 | - |
Title | Venue | Date | Code |
---|---|---|---|
Awesome-Dual-System-VLA: OpenHelix: A Short Survey, Empirical Analysis, and Open-Source Dual-System VLA Model for Robotic Manipulation | - | 2025-05-06 | - |
Awesome-Implicit-NeRF-Robotics: Neural Fields in Robotics: A Survey | - | 2024-10-26 | - |
Awesome-Robotics-3D | - | 2024-08-13 | - |
Awesome-Video-Robotic-Papers | - | 2024-06-18 | - |
awesome-humanoid-learning | - | 2024-01-16 | - |
Awesome-Robotics-Foundation-Models: Foundation Models in Robotics: Applications, Challenges, and the Future | - | 2023-12-13 | - |
Awesome-Generalist-Robots-via-Foundation-Models: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis | - | 2023-06-20 | - |
Awesome-LLM-Robotics | - | 2022-08-12 | - |
If you find this repository useful, please consider citing it:

```bibtex
@misc{nkl-hmhei-cc-bci2024roboticsmanipulation,
  title = {Awesome-Robotics-Manipulation},
  author = {{NKL-HMHEI} and {CC-BCI Group}},
  howpublished = {GitHub repository},
  url = {https://github.com/BaiShuanghao/Awesome-Robotics-Manipulation},
  year = {2024},
}
```

This repository is developed and maintained by:
- National Key Laboratory of Human-Machine Hybrid-Enhanced Intelligence (NKL-HMHEI), Prof. Nanning Zheng [Google Scholar].
- XJTU Cognitive Computing and Brain-Computer Interaction (CC-BCI) Group, Prof. Badong Chen [Google Scholar].