Commit bb115b6: Add Grasping Deep Learning post
1 parent 9fe2954

5 files changed: 38 additions, 0 deletions

---
author: Boston Cleek
comments: false
date: 2020-09-28
layout: post
title: Grasping using Deep Learning
media_type: image
media_link: /assets/images/blog_posts/deep_learning_grasp/grasp.png
description: Grasping using Deep Learning now available in the MoveIt Task Constructor
categories:
- Deep Learning
- Grasping
- MoveIt
- 3D perception
---

[//]: # (Image References)
[image2]: /assets/images/blog_posts/deep_learning_grasp/image2.gif
[image3]: /assets/images/blog_posts/deep_learning_grasp/image3.gif

MoveIt now supports robust grasp pose generation using deep learning. Pick-and-place robots equipped with a depth camera and either a parallel jaw or suction gripper can increase productivity when paired with deep learning. The MoveIt Task Constructor provides an interface for any grasp pose generation algorithm, making MoveIt’s pick-and-place capabilities more flexible and powerful.
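
Concretely, the grasp generator appears in the pipeline as a generator stage of the task. The snippet below is a minimal sketch of wiring such a stage into a task, loosely based on the Deep Grasp Demo; the class, template argument, and setter names shown here (`stages::DeepGraspPose`, `SampleGraspPosesAction`, and so on) may differ between versions, so treat it as illustrative rather than canonical.

```cpp
#include <moveit/task_constructor/stage.h>
#include <moveit/task_constructor/stages/deep_grasp_pose.h>
#include <moveit_task_constructor_msgs/SampleGraspPosesAction.h>

using namespace moveit::task_constructor;

// Build a generator stage that requests grasp candidates from a GPD or
// Dex-Net action server and feeds them into the rest of the task.
std::unique_ptr<Stage> makeGraspGenerator(Stage* current_state_stage)
{
  auto grasp_generator =
      std::make_unique<stages::DeepGraspPose<moveit_task_constructor_msgs::SampleGraspPosesAction>>(
          "sample_grasps",         // action namespace of the grasp pose server
          "generate grasp pose");  // stage name shown in task introspection
  grasp_generator->properties().configureInitFrom(Stage::PARENT);
  grasp_generator->setPreGraspPose("open");  // gripper posture before closing
  grasp_generator->setObject("object");      // collision object to pick
  grasp_generator->setMonitoredStage(current_state_stage);  // stage providing the current scene
  return grasp_generator;
}
```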

Currently, the [Grasp Pose Detection](https://github.com/atenpas/gpd) (GPD) library and [Dex-Net](https://berkeleyautomation.github.io/dex-net/) are used to detect 6-DOF grasp poses from 3D sensor data. GPD generates grasp poses for parallel jaw grippers, while Dex-Net works with both parallel jaw and suction grippers. These neural networks are trained on datasets containing millions of images, allowing them to pick novel objects from cluttered scenes.
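
GPD can also be driven directly as a library, which is useful for experimenting outside of MoveIt. The sketch below follows the shape of GPD's own `detect_grasps` example; the file names are hypothetical and the exact API may vary between GPD versions.

```cpp
#include <gpd/grasp_detector.h>

#include <Eigen/Dense>
#include <memory>
#include <vector>

int main()
{
  // Camera origin, used to orient surface normals (assumes a single view).
  Eigen::Matrix3Xd view_points(3, 1);
  view_points.setZero();

  // Scene point cloud and detector parameters (hypothetical file names).
  gpd::util::Cloud cloud("scene.pcd", view_points);
  gpd::GraspDetector detector("gpd_params.cfg");

  // Filter the cloud and compute normals, then sample, score, and rank
  // 6-DOF grasp candidates with the trained network.
  detector.preprocessPointCloud(cloud);
  std::vector<std::unique_ptr<gpd::candidate::Hand>> grasps =
      detector.detectGrasps(cloud);
  return grasps.empty() ? 1 : 0;
}
```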

The depth camera can either be mounted to a link on the robot or remain stationary. If the camera is mounted to a link, or if multiple cameras are used, it is possible to reconstruct a 3D point cloud or collect depth images from multiple viewpoints. This technique enables grasp pose generators to sample more grasp candidates from views that would otherwise be occluded.
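
A minimal sketch of fusing several views into one scene cloud with PCL is shown below, assuming the camera pose for each view is known in a common fixed frame (for example from TF or the robot's forward kinematics); a real pipeline would typically also voxel-filter and crop the merged cloud.

```cpp
#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

#include <Eigen/Dense>
#include <vector>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Merge clouds captured from several viewpoints into a single scene cloud.
CloudT::Ptr mergeViews(const std::vector<CloudT::Ptr>& clouds,
                       const std::vector<Eigen::Affine3f>& camera_poses)
{
  CloudT::Ptr merged(new CloudT);
  for (std::size_t i = 0; i < clouds.size(); ++i)
  {
    // Express the i-th view in the fixed frame before concatenating.
    CloudT transformed;
    pcl::transformPointCloud(*clouds[i], transformed, camera_poses[i]);
    *merged += transformed;
  }
  return merged;
}
```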

The UR5 below uses a grasp pose generated by GPD to pick up a box. The point cloud was acquired by the RealSense camera to the left of the robot.

![](/assets/images/blog_posts/deep_learning_grasp/image5.gif)

The animations below show the capabilities of deep learning for grasp pose generation. Compared to GPD, Dex-Net achieves higher grasp success rates, greater reliability, and faster computation.

|                |                 |
|----------------|-----------------|
| ![image2]      | ![image3]       |

To learn more about how to use GPD and Dex-Net within MoveIt, see the [Deep Grasp Tutorial](https://ros-planning.github.io/moveit_tutorials/doc/moveit_deep_grasps/moveit_deep_grasps_tutorial.html) and the [Deep Grasp Demo](https://github.com/PickNikRobotics/deep_grasp_demo). The demo contains detailed instructions for acquiring data by simulating depth sensors and executing motion plans in Gazebo.
