A MuJoCo-based simulation environment for contact-rich robotic manipulation tasks, featuring a UR5e robotic arm equipped with DIGIT tactile sensors for precision peg-in-hole operations.
This repository provides a focused simulation framework for contact-rich manipulation with tactile feedback. The core system includes:
- UR5e robotic arm with 6-DOF inverse kinematics
- RH-P12-RN gripper with parallel jaw actuation
- DIGIT tactile sensors with high-resolution contact detection
- Physics simulation using MuJoCo for realistic contact dynamics
- Interactive teleoperation with sensor data recording
- Reinforcement Learning environment for peg-in-hole tasks
- High-resolution contact detection with 2552-node FEM grid per sensor
- Proximity-based sensing with configurable thresholds (default: 0.8mm)
- Real-time data logging for research and analysis
- Robust inverse kinematics using Levenberg-Marquardt optimization
- Task-space control with position and orientation targets
- Smooth motion execution with joint interpolation
- Realistic physics with contact friction and dynamics
- Hexagon peg-in-hole manipulation task
- Interactive teleoperation with keyboard controls
- Gymnasium-compatible RL environment for automated learning
- Multi-phase task decomposition (approach, align, insert, release)
- PPO training with tactile feedback rewards
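The Levenberg-Marquardt (damped least-squares) IK update mentioned above can be sketched as follows. The `jacobian_fn` callable and the damping value are illustrative placeholders, not the actual API of `simple_ik_legacy.py`:

```python
import numpy as np

def dls_ik_step(q, jacobian_fn, pose_error, damping=0.05):
    """One damped least-squares (Levenberg-Marquardt style) IK step.

    q           : (6,) current joint angles for the UR5e
    jacobian_fn : callable returning the (6, 6) task-space Jacobian at q
    pose_error  : (6,) stacked position + orientation error in task space
    damping     : LM damping factor; larger values converge more slowly
                  but behave better near singularities
    """
    J = jacobian_fn(q)
    # Solve (J^T J + lambda^2 I) dq = J^T e rather than inverting J directly,
    # which keeps the step well-conditioned when J loses rank.
    JTJ = J.T @ J + (damping ** 2) * np.eye(J.shape[1])
    dq = np.linalg.solve(JTJ, J.T @ pose_error)
    return q + dq
```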
- Python 3.8+
- NVIDIA GPU (recommended for RL training)
- Windows/Linux/macOS
```bash
# Install all dependencies
pip install -r requirements.txt
```

Core packages:
- `mujoco>=3.0.0` - Physics simulation
- `numpy>=1.24.0` - Numerical computing
- `scipy>=1.10.0` - Scientific computing
- `matplotlib>=3.6.0` - Visualization
- `gymnasium>=0.29.0` - RL environments
- `stable-baselines3>=2.0.0` - RL algorithms
- `torch>=1.13.0` - Deep learning
- `wandb>=0.15.0` - Experiment tracking
For CUDA-enabled PyTorch:
```bash
# Check CUDA version: nvidia-smi
# Visit https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

```bash
python simple_digit_demo.py
```

Runs a basic simulation showing the UR5e arm with DIGIT sensors detecting contact with a peg.

```bash
python task_space_control_demo.py
```

Controls:
- Position: W/S (X±), A/D (Y±), Q/E (Z±)
- Rotation: I/K (Roll±), J/L (Pitch±), U/O (Yaw±)
- Gripper: C (close), V (open)
- Recording: R (toggle), M (snapshot), G (sensor status)
- Utility: H (home), P (status), T (test grasp), X (exit)
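The position keys above map naturally onto axis/sign pairs. A minimal sketch of such a mapping is shown below; the step size and the `key_to_delta` helper are illustrative assumptions, not the demo's actual internals:

```python
# Keyboard-to-motion mapping for Cartesian jogging: key -> (axis index, sign).
# The 5 mm step size is illustrative; the demo's actual increment may differ.
POS_STEP_M = 0.005
KEY_TO_AXIS = {
    "w": (0, +1), "s": (0, -1),   # X+ / X-
    "a": (1, +1), "d": (1, -1),   # Y+ / Y-
    "q": (2, +1), "e": (2, -1),   # Z+ / Z-
}

def key_to_delta(key):
    """Translate a single key press into an [dx, dy, dz] position increment."""
    delta = [0.0, 0.0, 0.0]
    if key in KEY_TO_AXIS:
        axis, sign = KEY_TO_AXIS[key]
        delta[axis] = sign * POS_STEP_M
    return delta
```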
```bash
python hexagon_peg_interactive_v2.py
```

Demonstrates manual hexagon peg insertion with tactile feedback visualization.
```bash
python hexagon_peg_rl_env.py
```

Tests the reinforcement learning environment for automated peg insertion.
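Because the environment is Gymnasium-compatible, it follows the standard `reset`/`step` protocol, so a rollout loop is generic. The helper below is a sketch that works with any such environment; pass whichever env class `hexagon_peg_rl_env.py` actually exposes:

```python
def run_episode(env, policy, max_steps=500):
    """Roll out one episode with any Gymnasium-style env (reset/step API).

    env    : object with reset() -> (obs, info) and
             step(action) -> (obs, reward, terminated, truncated, info)
    policy : callable mapping an observation to an action
    """
    obs, info = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward
```

For PPO training, a Gymnasium-compatible env can be handed straight to stable-baselines3, e.g. `PPO("MlpPolicy", env).learn(total_timesteps=...)`.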
```
ContactTasksSim/
├── README.md                        # This documentation
├── requirements.txt                 # Python dependencies
│
├── Core Simulation Files
├── simple_digit_demo.py             # Basic DIGIT sensor demonstration
├── task_space_control_demo.py       # Interactive teleoperation with recording
├── ur5e_digit_demo.py               # UR5e arm with DIGIT sensors demo
├── simple_ik_legacy.py              # Robust inverse kinematics solver
├── gripper_digit_sensor.py          # DIGIT sensor implementation
├── modular_digit_sensor.py          # Alternative sensor configuration
│
├── Interactive Demos
├── hexagon_peg_interactive_v2.py    # Advanced peg-in-hole demo
├── hexagon_peg_interactive.py       # Basic interactive demo
├── move_to_ee_pose.py               # End-effector positioning demo
│
├── Reinforcement Learning
├── hexagon_peg_rl_env.py            # Gymnasium environment for RL
│
├── Core Modules
├── src/
│   ├── ik_module.py                 # Inverse kinematics utilities
│   ├── ur5e_simulator.py            # Robot simulation core
│   ├── PID.py                       # PID controller
│   └── util.py                      # Utility functions
│
├── Assets & Data
├── filtered_FEM_grid.csv            # 2552-node tactile sensor grid
├── ur5e_with_DIGIT_primitive_hexagon.xml  # Main simulation scene
├── assets/                          # 3D models and configurations
├── mesh/                            # 3D mesh files
├── RH-P12-RN/                       # Gripper models
└── Teleoperation_sensor_data/       # Recorded sensor data
```
Key sensor settings in `gripper_digit_sensor.py`:

```python
PROXIMITY_THRESHOLD_MM = 0.8      # Contact detection distance
ROI_SIZE_MM = 15.0                # Sensor active area
SENSING_PLANE_OFFSET_MM = 30.0    # Distance from gel surface
```

- Joint ranges: Defined in XML configuration files
- Reach radius: Approximately 850mm
- End-effector: RH-P12-RN parallel gripper
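Proximity-based contact detection over the FEM grid reduces to thresholding per-node distances. The sketch below assumes each sensor reports a 2552-element vector of node-to-surface distances in millimetres; the function names are illustrative, not the actual `gripper_digit_sensor.py` API:

```python
import numpy as np

PROXIMITY_THRESHOLD_MM = 0.8  # same default as gripper_digit_sensor.py

def contact_mask(node_distances_mm, threshold_mm=PROXIMITY_THRESHOLD_MM):
    """Boolean mask over the 2552 FEM nodes: True where a node is within
    the proximity threshold of an object surface."""
    return np.asarray(node_distances_mm) < threshold_mm

def contact_summary(node_distances_mm):
    """Aggregate per-node distances into a simple contact report."""
    mask = contact_mask(node_distances_mm)
    return {
        "in_contact": bool(mask.any()),
        "num_nodes": int(mask.sum()),
        "min_distance_mm": float(np.min(node_distances_mm)),
    }
```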
The teleoperation system records comprehensive tactile data in CSV format:
Data Structure (5112 columns):
- `timestamp` (1): Simulation time in seconds
- `gripper_value` (1): Gripper opening (0.0-1.6)
- `joint1_rad`...`joint6_rad` (6): Joint angles in radians
- `left_sensor_0`...`left_sensor_2551` (2552): Left DIGIT sensor distances
- `right_sensor_0`...`right_sensor_2551` (2552): Right DIGIT sensor distances
Usage:
- Press 'R' in teleoperation mode to start/stop recording
- Data saved to `Teleoperation_sensor_data/session_YYYYMMDD_HHMMSS.csv`
This simulation framework supports:
- Tactile-guided manipulation research
- Contact-rich task learning with RL
- Sensor fusion for robotic perception
- Real robot experiment validation
For questions or issues, please open a GitHub issue.