ndolphin-github/VisionTactileSim_Mujoco

Contact Task Simulation with UR5e and DIGIT Sensors

A MuJoCo-based simulation environment for contact-rich robotic manipulation tasks, featuring a UR5e robotic arm equipped with DIGIT tactile sensors for precision peg-in-hole operations.

Overview

This repository provides a focused simulation framework for contact-rich manipulation with tactile feedback. The core system includes:

  • UR5e robotic arm with 6-DOF inverse kinematics
  • RH-P12-RN gripper with parallel jaw actuation
  • DIGIT tactile sensors with high-resolution contact detection
  • Physics simulation using MuJoCo for realistic contact dynamics
  • Interactive teleoperation with sensor data recording
  • Reinforcement Learning environment for peg-in-hole tasks

Key Features

Tactile Sensing

  • High-resolution contact detection with 2552-node FEM grid per sensor
  • Proximity-based sensing with configurable thresholds (default: 0.8mm)
  • Real-time data logging for research and analysis

Robotic Control

  • Robust inverse kinematics using Levenberg-Marquardt optimization
  • Task-space control with position and orientation targets
  • Smooth motion execution with joint interpolation
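The damped-least-squares update behind Levenberg-Marquardt IK can be sketched in a few lines. This is an illustrative example on a hypothetical 2-link planar arm, not the UR5e solver in `simple_ik_legacy.py`; link lengths, damping, and tolerances are assumptions.

```python
import numpy as np

def fk(q, l1=0.4, l2=0.3):
    """End-effector position of a toy 2-link planar arm."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=0.4, l2=0.3):
    """Analytic task-space Jacobian of the same arm."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_lm(target, q0, damping=1e-2, tol=1e-6, max_iter=200):
    """Damped least squares: dq = J^T (J J^T + lambda^2 I)^-1 e."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        e = target - fk(q)
        if np.linalg.norm(e) < tol:
            break
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
        q += dq
    return q

q = ik_lm(np.array([0.5, 0.2]), q0=[0.1, 0.1])
```

The damping term keeps the solve well-conditioned near singular configurations, which is the main reason to prefer LM over plain Jacobian pseudo-inverse for a 6-DOF arm.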

Simulation Environment

  • Realistic physics with contact friction and dynamics
  • Hexagon peg-in-hole manipulation task
  • Interactive teleoperation with keyboard controls

Machine Learning

  • Gymnasium-compatible RL environment for automated learning
  • Multi-phase task decomposition (approach, align, insert, release)
  • PPO training with tactile feedback rewards
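The multi-phase decomposition above can be viewed as a small state machine. A minimal sketch, with phase names taken from this README but transition conditions and thresholds that are purely illustrative assumptions:

```python
PHASES = ["approach", "align", "insert", "release"]

def next_phase(phase, dist_to_hole, alignment_err, insertion_depth,
               approach_thresh=0.02, align_thresh=0.005, depth_goal=0.03):
    """Advance to the next phase once the current phase's (assumed)
    success condition is met; otherwise stay in the current phase."""
    if phase == "approach" and dist_to_hole < approach_thresh:
        return "align"
    if phase == "align" and alignment_err < align_thresh:
        return "insert"
    if phase == "insert" and insertion_depth >= depth_goal:
        return "release"
    return phase
```

In the actual environment the phase would typically gate both the reward shaping and the termination condition of each sub-task.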

Installation

Prerequisites

  • Python 3.8+
  • NVIDIA GPU (recommended for RL training)
  • Windows/Linux/macOS

Dependencies

# Install all dependencies
pip install -r requirements.txt

Core packages:

  • mujoco>=3.0.0 - Physics simulation
  • numpy>=1.24.0 - Numerical computing
  • scipy>=1.10.0 - Scientific computing
  • matplotlib>=3.6.0 - Visualization
  • gymnasium>=0.29.0 - RL environment
  • stable-baselines3>=2.0.0 - RL algorithms
  • torch>=1.13.0 - Deep learning
  • wandb>=0.15.0 - Experiment tracking

GPU Setup (for RL training)

For CUDA-enabled PyTorch:

# Check CUDA version: nvidia-smi
# Visit https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

Quick Start

1. Basic Simulation Demo

python simple_digit_demo.py

Runs a basic simulation showing the UR5e arm with DIGIT sensors detecting contact with a peg.

2. Interactive Teleoperation

python task_space_control_demo.py

Controls:

  • Position: W/S (X±), A/D (Y±), Q/E (Z±)
  • Rotation: I/K (Roll±), J/L (Pitch±), U/O (Yaw±)
  • Gripper: C (close), V (open)
  • Recording: R (toggle), M (snapshot), G (sensor status)
  • Utility: H (home), P (status), T (test grasp), X (exit)
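The position/rotation keys above map naturally onto per-axis task-space deltas. A hedged sketch of such a mapping; the step sizes and dictionary layout are assumptions, not the actual implementation in `task_space_control_demo.py`:

```python
STEP_M, STEP_RAD = 0.01, 0.05  # assumed per-keypress increments

# (kind, axis index, signed delta) per key, following the table above
KEY_BINDINGS = {
    "w": ("pos", 0, +STEP_M), "s": ("pos", 0, -STEP_M),
    "a": ("pos", 1, +STEP_M), "d": ("pos", 1, -STEP_M),
    "q": ("pos", 2, +STEP_M), "e": ("pos", 2, -STEP_M),
    "i": ("rot", 0, +STEP_RAD), "k": ("rot", 0, -STEP_RAD),
    "j": ("rot", 1, +STEP_RAD), "l": ("rot", 1, -STEP_RAD),
    "u": ("rot", 2, +STEP_RAD), "o": ("rot", 2, -STEP_RAD),
}

def apply_key(pose, key):
    """Return a new (position, rotation) pose with the key's delta applied."""
    pos, rot = list(pose[0]), list(pose[1])
    if key in KEY_BINDINGS:
        kind, axis, delta = KEY_BINDINGS[key]
        (pos if kind == "pos" else rot)[axis] += delta
    return (pos, rot)
```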

3. Interactive Peg-in-Hole Demo

python hexagon_peg_interactive_v2.py

Demonstrates manual hexagon peg insertion with tactile feedback visualization.

4. RL Environment Test

python hexagon_peg_rl_env.py

Tests the reinforcement learning environment for automated peg insertion.

Repository Structure

ContactTasksSim/
├── README.md                               # This documentation
├── requirements.txt                        # Python dependencies
│
├── Core Simulation Files
├── simple_digit_demo.py                   # Basic DIGIT sensor demonstration
├── task_space_control_demo.py             # Interactive teleoperation with recording
├── ur5e_digit_demo.py                     # UR5e arm with DIGIT sensors demo
├── simple_ik_legacy.py                    # Robust inverse kinematics solver
├── gripper_digit_sensor.py                # DIGIT sensor implementation
├── modular_digit_sensor.py                # Alternative sensor configuration
│
├── Interactive Demos
├── hexagon_peg_interactive_v2.py          # Advanced peg-in-hole demo
├── hexagon_peg_interactive.py             # Basic interactive demo
├── move_to_ee_pose.py                     # End-effector positioning demo
│
├── Reinforcement Learning
├── hexagon_peg_rl_env.py                  # Gymnasium environment for RL
│
├── Core Modules
├── src/
│   ├── ik_module.py                       # Inverse kinematics utilities
│   ├── ur5e_simulator.py                  # Robot simulation core
│   ├── PID.py                             # PID controller
│   └── util.py                            # Utility functions
│
├── Assets & Data
├── filtered_FEM_grid.csv                  # 2552-node tactile sensor grid
├── ur5e_with_DIGIT_primitive_hexagon.xml  # Main simulation scene
├── assets/                                # 3D models and configurations
├── mesh/                                  # 3D mesh files
├── RH-P12-RN/                            # Gripper models
└── Teleoperation_sensor_data/             # Recorded sensor data

Configuration

DIGIT Sensor Parameters

Key sensor settings in gripper_digit_sensor.py:

PROXIMITY_THRESHOLD_MM = 0.8    # Contact detection distance
ROI_SIZE_MM = 15.0              # Sensor active area  
SENSING_PLANE_OFFSET_MM = 30.0  # Distance from gel surface
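Proximity-based sensing reduces to thresholding per-node distances over the 2552-node grid: any node closer than `PROXIMITY_THRESHOLD_MM` to a surface counts as in contact. A minimal sketch, assuming the sensor exposes a flat array of node distances in millimeters:

```python
import numpy as np

PROXIMITY_THRESHOLD_MM = 0.8  # contact detection distance, as configured above

def detect_contact(node_distances_mm):
    """Return a boolean contact mask over the grid and the contact-node count."""
    mask = np.asarray(node_distances_mm) < PROXIMITY_THRESHOLD_MM
    return mask, int(mask.sum())

distances = np.full(2552, 5.0)   # all nodes far from any surface
distances[:10] = 0.5             # 10 nodes within the contact threshold
mask, n_contacts = detect_contact(distances)
```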

Robot Workspace

  • Joint ranges: Defined in XML configuration files
  • Reach radius: Approximately 850mm
  • End-effector: RH-P12-RN parallel gripper

Data Recording

The teleoperation system records comprehensive tactile data in CSV format:

Data Structure (5112 columns):

  • timestamp (1): Simulation time in seconds
  • gripper_value (1): Gripper opening (0.0-1.6)
  • joint1_rad...joint6_rad (6): Joint angles in radians
  • left_sensor_0...left_sensor_2551 (2552): Left DIGIT sensor distances
  • right_sensor_0...right_sensor_2551 (2552): Right DIGIT sensor distances

Usage:

  • Press 'R' in teleoperation mode to start/stop recording
  • Data saved to Teleoperation_sensor_data/session_YYYYMMDD_HHMMSS.csv
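The 5112-column layout can be reconstructed programmatically, which is handy when slicing a recorded session into its joint and sensor blocks. Column names follow the structure listed above; the commented pandas lines are an illustrative usage, not part of the repository:

```python
# Rebuild the expected header: 1 + 1 + 6 + 2552 + 2552 = 5112 columns
columns = (["timestamp", "gripper_value"]
           + [f"joint{i}_rad" for i in range(1, 7)]
           + [f"left_sensor_{i}" for i in range(2552)]
           + [f"right_sensor_{i}" for i in range(2552)])

# e.g. with pandas:
#   df = pd.read_csv("Teleoperation_sensor_data/<session file>.csv")
#   left = df[[c for c in df.columns if c.startswith("left_sensor_")]].to_numpy()
```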

Research Applications

This simulation framework supports:

  • Tactile-guided manipulation research
  • Contact-rich task learning with RL
  • Sensor fusion for robotic perception
  • Real robot experiment validation

For questions or issues, please open a GitHub issue.
