---
author: Boston Cleek
comments: false
date: 2020-09-28
layout: post
title: Grasping using Deep Learning
media_type: image
media_link: /assets/images/blog_posts/deep_learning_grasp/grasp.png
description: Grasping using Deep Learning now available in the MoveIt Task Constructor
categories:
- Deep Learning
- Grasping
- MoveIt
- 3D perception
---

[//]: # (Image References)
[image2]: /assets/images/blog_posts/deep_learning_grasp/image2.gif
[image3]: /assets/images/blog_posts/deep_learning_grasp/image3.gif

MoveIt now supports robust grasp pose generation using deep learning. Pick-and-place robots equipped with a depth camera and either a parallel-jaw or suction gripper can increase their productivity when paired with deep learning. The MoveIt Task Constructor provides an interface for any grasp pose generation algorithm, making MoveIt’s pick-and-place capabilities more flexible and powerful.
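
To see how a grasp generator slots into a task, here is a minimal C++ sketch using the standard Task Constructor API. The group, object, and pose names are placeholders, and a full pick pipeline would add connecting, IK, and approach/lift stages; the deep-learning stages shipped with the demo take the place of the sampling-based `GenerateGraspPose` stage shown here.

```cpp
#include <cmath>
#include <memory>
#include <moveit/task_constructor/task.h>
#include <moveit/task_constructor/stages/current_state.h>
#include <moveit/task_constructor/stages/generate_grasp_pose.h>

using namespace moveit::task_constructor;

// Sketch only: wires a grasp pose generator into a task. A full pick pipeline
// would add connecting, IK, and approach/lift stages around it.
Task createGraspTask() {
  Task task("grasp_sketch");
  task.loadRobotModel();

  auto current = std::make_unique<stages::CurrentState>("current state");
  Stage* initial = current.get();
  task.add(std::move(current));

  // Any generator that emits grasp pose candidates can be used here; the
  // deep grasp demo provides GPD- and Dex-Net-backed alternatives to this
  // sampling-based stage.
  auto grasp = std::make_unique<stages::GenerateGraspPose>("generate grasp pose");
  grasp->setEndEffector("hand");      // placeholder end-effector group
  grasp->setObject("object");         // placeholder collision object id
  grasp->setPreGraspPose("open");     // named gripper posture before grasping
  grasp->setAngleDelta(M_PI / 12);    // angular sampling resolution
  grasp->setMonitoredStage(initial);  // grasp candidates build on this state
  task.add(std::move(grasp));

  return task;
}
```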

Currently, the [Grasp Pose Detection](https://github.com/atenpas/gpd) (GPD) library and [Dex-Net](https://berkeleyautomation.github.io/dex-net/) are used to detect 6-DOF grasp poses from 3D sensor data. GPD generates grasp poses for parallel-jaw grippers, while Dex-Net works with both parallel-jaw and suction grippers. Both neural networks are trained on datasets containing millions of images, allowing them to pick novel objects from cluttered scenes.
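
As a rough illustration of what these libraries do, GPD can also be called directly from its C++ API: given a point cloud, it returns a ranked list of 6-DOF grasp candidates. The sketch below follows GPD's standalone tutorial; the file names are placeholders and exact signatures may differ between GPD versions.

```cpp
#include <gpd/grasp_detector.h>
#include <gpd/util/cloud.h>

int main() {
  // Position(s) the cloud was captured from, one column per camera viewpoint.
  Eigen::Matrix3Xd view_points(3, 1);
  view_points.setZero();

  // Load a point cloud of the scene (placeholder file name).
  gpd::util::Cloud cloud("scene.pcd", view_points);

  // Configure the detector: network weights, gripper geometry, workspace, etc.
  gpd::GraspDetector detector("gpd_config.cfg");

  // Voxelize and crop the cloud, then detect ranked 6-DOF grasp candidates.
  detector.preprocessPointCloud(cloud);
  auto grasps = detector.detectGrasps(cloud);
  return grasps.empty() ? 1 : 0;
}
```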

The depth camera can either be mounted to a link on the robot or remain stationary. If the camera is mounted to a link, or if multiple cameras are used, it is possible to reconstruct a 3D point cloud or collect depth images from multiple viewpoints. This technique enables grasp pose generators to sample more grasp candidates from views that would otherwise be occluded.
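
One simple way to fuse multiple viewpoints, assuming each camera pose is known, is to transform every capture into a common frame and concatenate the clouds. A minimal PCL sketch (frame conventions and poses are up to the application):

```cpp
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Merge per-view clouds into one scene cloud, given each camera-to-world pose.
CloudT::Ptr mergeViews(const std::vector<CloudT::Ptr>& views,
                       const std::vector<Eigen::Affine3f>& camera_poses) {
  CloudT::Ptr merged(new CloudT);
  for (size_t i = 0; i < views.size(); ++i) {
    CloudT transformed;
    // Express the i-th view in the common world frame.
    pcl::transformPointCloud(*views[i], transformed, camera_poses[i]);
    *merged += transformed;  // concatenation; voxel-filter afterwards if needed
  }
  return merged;
}
```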

The UR5 below uses a grasp pose generated by GPD to pick up a box. The point cloud was acquired by the RealSense camera to the left of the robot.

![](/assets/images/blog_posts/deep_learning_grasp/image5.gif)

The animations below show the capabilities of deep learning for grasp pose generation. Compared to GPD, Dex-Net achieves a higher grasp success rate, greater reliability, and faster computation.

|           |           |
|-----------|-----------|
| ![image2] | ![image3] |

To learn more about how to use GPD and Dex-Net within MoveIt, see the [Deep Grasp Tutorial](https://ros-planning.github.io/moveit_tutorials/doc/moveit_deep_grasps/moveit_deep_grasps_tutorial.html) and the [Deep Grasp Demo](https://github.com/PickNikRobotics/deep_grasp_demo). The demo contains detailed instructions for acquiring data from simulated depth sensors and executing motion plans in Gazebo.