This is a demonstration of a planar grasping system using a Franka FR3 robot arm. It leverages a trained GR-ConvNet model for grasp pose detection from RGB-D data and controls the robot via franky.
- Robot: Franka FR3
- Camera: Intel RealSense D435i
- Grasping Network: GR-ConvNet
- Control Interface: franky (libfranka 0.15.0)
- OS: Ubuntu 22.04
- Python: 3.10
- CUDA: 12.4
- PyTorch: 2.6.0
```
conda create -n grcnn python=3.10
conda activate grcnn
pip install torch==2.6.0 torchvision==0.21.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
```
# libfranka 0.15.0
pip install franky-control
```

Run the offline demo:

```
python run_offline.py
```

or the real-time demo:

```
python run_realtime.py
```

Note: This pipeline uses an eye-in-hand setup. Before use, please replace the calibration parameters in the code with your own.
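Because the setup is eye-in-hand, a grasp detected in the camera frame has to be mapped through the hand-eye calibration and the robot's current end-effector pose into the base frame before it can be executed. A minimal sketch of that transform chain; the `T_ee_cam` values and function names below are placeholders, not the repository's actual calibration:

```python
import numpy as np

# Hypothetical eye-in-hand calibration result: the camera pose expressed in
# the end-effector frame. Replace these values with your own calibration.
T_ee_cam = np.array([
    [1.0, 0.0, 0.0, 0.05],
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.06],
    [0.0, 0.0, 0.0, 1.00],
])

def grasp_cam_to_base(p_cam, T_base_ee):
    """Map a grasp position from the camera frame to the robot base frame.

    p_cam     : (3,) grasp position in the camera frame (metres)
    T_base_ee : (4, 4) current end-effector pose, read from the robot
    """
    p_h = np.append(p_cam, 1.0)                # homogeneous coordinates
    return (T_base_ee @ T_ee_cam @ p_h)[:3]

# Toy example: identity end-effector pose, point 0.4 m in front of the camera
p_base = grasp_cam_to_base(np.array([0.0, 0.0, 0.4]), np.eye(4))
```

In the real pipeline, `T_base_ee` would come from the robot's current pose (e.g. via franky) rather than the identity used in this toy example.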
```
python run_franky_grcnn_realtime.py
```

We present a novel generative residual convolutional neural network based model architecture which detects objects in the camera's field of view and predicts a suitable antipodal grasp configuration for the objects in the image.
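Concretely, GR-ConvNet outputs per-pixel maps — grasp quality, cos 2θ, sin 2θ, and gripper width — and a single grasp is read off at the quality maximum. A sketch of that decoding on toy maps (map and variable names here are illustrative):

```python
import numpy as np

def decode_grasp(q_map, cos_map, sin_map, width_map):
    """Pick the highest-quality pixel and recover the grasp centre,
    orientation and width from the four output maps."""
    idx = np.unravel_index(np.argmax(q_map), q_map.shape)
    # The angle is encoded as (cos 2θ, sin 2θ) to avoid wrap-around at ±π/2
    angle = 0.5 * np.arctan2(sin_map[idx], cos_map[idx])
    return idx, angle, width_map[idx]

# Toy 3x3 maps with the best grasp at pixel (1, 2), θ = π/4, width 0.04
q = np.zeros((3, 3)); q[1, 2] = 0.9
cos2t = np.zeros((3, 3))
sin2t = np.zeros((3, 3)); sin2t[1, 2] = 1.0
w = np.zeros((3, 3)); w[1, 2] = 0.04
centre, theta, width = decode_grasp(q, cos2t, sin2t, w)
```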
This repository contains the implementation of the Generative Residual Convolutional Neural Network (GR-ConvNet) from the paper:
Sulabh Kumra, Shirin Joshi, Ferat Sahin
If you use this project in your research or wish to refer to the baseline results published in the paper, please use the following BibTeX entry:
@inproceedings{kumra2020antipodal,
author={Kumra, Sulabh and Joshi, Shirin and Sahin, Ferat},
booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network},
year={2020},
pages={9626-9633},
doi={10.1109/IROS45743.2020.9340777}
}
- numpy
- opencv-python
- matplotlib
- scikit-image
- imageio
- torch
- torchvision
- torchsummary
- tensorboardX
- pyrealsense2
- Pillow
- Checkout the robotic grasping package
```
$ git clone https://github.com/skumra/robotic-grasping.git
```

- Create a virtual environment

```
$ python3.6 -m venv --system-site-packages venv
```

- Source the virtual environment

```
$ source venv/bin/activate
```

- Install the requirements

```
$ cd robotic-grasping
$ pip install -r requirements.txt
```

This repository supports both the Cornell Grasping Dataset and the Jacquard Dataset.
- Download and extract the Cornell Grasping Dataset.
- Convert the PCD files to depth images by running:

```
python -m utils.dataset_processing.generate_cornell_depth <Path To Dataset>
```
- Download and extract the Jacquard Dataset.
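The Cornell depth-generation step above reads each ASCII .pcd file and scatters the z values into a 640-column depth image using the per-point index field. A rough sketch, assuming the usual `x y z rgb index` row layout (check your own files; the function name is illustrative):

```python
import numpy as np

def pcd_to_depth(lines, shape=(480, 640)):
    """Sketch of the PCD-to-depth conversion performed by
    utils.dataset_processing.generate_cornell_depth: each data row of a
    Cornell ASCII .pcd holds "x y z rgb index", with index = row * 640 + col
    (the exact field layout is an assumption here)."""
    depth = np.zeros(shape, dtype=np.float32)
    lines = list(lines)
    # Skip the PCD header; point data starts after the DATA line
    start = next(i for i, l in enumerate(lines) if l.startswith("DATA")) + 1
    for line in lines[start:]:
        vals = line.split()
        z = float(vals[2])                              # depth = z coordinate
        row, col = divmod(int(float(vals[4])), shape[1])
        depth[row, col] = z
    return depth

# Toy file: one point 0.5 m deep at pixel (1, 5)  (index 645 = 1*640 + 5)
sample = ["DATA ascii\n", "0.1 0.2 0.5 0 645\n"]
depth = pcd_to_depth(sample)
```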
A model can be trained using the train_network.py script. Run train_network.py --help to see a full list of options.
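For reference, training regresses the four output maps (quality, cos 2θ, sin 2θ, width) against ground-truth maps, summing a smooth-L1 error per head. A numpy sketch of that objective (equal head weighting and the exact loss shape are assumptions; the repo implements this in PyTorch):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Elementwise smooth-L1 (Huber-style) error, averaged over the map."""
    d = np.abs(pred - target)
    return np.mean(np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta))

def grasp_loss(pred, target):
    """Total loss is the sum of per-map smooth-L1 errors over the four
    regression heads (quality/pos, cos 2θ, sin 2θ, width)."""
    heads = ("pos", "cos", "sin", "width")
    per_head = {h: smooth_l1(pred[h], target[h]) for h in heads}
    return sum(per_head.values()), per_head

# Toy example: a perfect prediction gives zero total loss
maps = {h: np.zeros((2, 2)) for h in ("pos", "cos", "sin", "width")}
total, _ = grasp_loss(maps, maps)
```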
Example for Cornell dataset:
```
python train_network.py --dataset cornell --dataset-path <Path To Dataset> --description training_cornell
```

Example for Jacquard dataset:

```
python train_network.py --dataset jacquard --dataset-path <Path To Dataset> --description training_jacquard --use-dropout 0 --input-size 300
```

The trained network can be evaluated using the evaluate.py script. Run evaluate.py --help for a full set of options.
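The --iou-eval flag scores predictions with the standard rectangle metric: a grasp counts as correct when its rectangle IoU with a ground-truth grasp exceeds 25% and the orientations differ by less than 30°. A sketch of that success test (the repo computes the IoU from the actual grasp rectangles; here it is taken as a given input):

```python
import numpy as np

def grasp_correct(iou, pred_angle, gt_angle,
                  iou_thresh=0.25, angle_thresh=np.deg2rad(30)):
    """Rectangle-metric success test: IoU > 25% and orientation within 30°.
    Grasp angles are equivalent modulo π (antipodal symmetry)."""
    diff = abs(pred_angle - gt_angle) % np.pi
    diff = min(diff, np.pi - diff)
    return iou > iou_thresh and diff < angle_thresh

ok = grasp_correct(0.30, np.deg2rad(10), np.deg2rad(-15))   # 25° apart
ok2 = grasp_correct(0.30, np.deg2rad(80), np.deg2rad(-80))  # 20° apart modulo π
```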
Example for Cornell dataset:
```
python evaluate.py --network <Path to Trained Network> --dataset cornell --dataset-path <Path to Dataset> --iou-eval
```

Example for Jacquard dataset:

```
python evaluate.py --network <Path to Trained Network> --dataset jacquard --dataset-path <Path to Dataset> --iou-eval --use-dropout 0 --input-size 300
```

A task can be executed using the relevant run script. All task scripts are named as run_<task name>.py. For example, to run the grasp generator:
```
python run_grasp_generator.py
```

To run the grasp generator with a robot, please use our ROS implementation for the Baxter robot. It is available at: https://github.com/skumra/baxter-pnp