Description
Hello, I am currently working on a project which involves stimulating and coordinating two UR5 in ROS/Gazebo for a pick and place task i.e. (the first robot picking up an object from a given position and moving to a certain position and the second robot moving to that position, taking the object from the first robot and going to the goal position). I am using Robotiq85 grippers for picking up an object till now I am successfully able to perform this task by hard-coding it, you can find the video here (https://www.youtube.com/watch?v=n6Vk9lIxKkg) but I want to perform this task using PPO for which I need to create an environment such that when the agent takes an action that actions is performed into the simulated world using Moveit Python interface (library which is used to control motions to the robot in Gazebo-simulated world). Your work seems to be a bit promising for making a baseline for my project but my question is since you used discrete states and actions the moveit was able to perform the simulation but in my case where I have continuous states and actions will moveit commander interface be fast to simulate the actions in Gazebo?