[Question] How do I deal with uncertainty when deploying RL policies (rsl_rl)? #3816
celestialdr4g0n asked this question in Q&A (Unanswered)
Replies: 1 comment
Thank you for posting this. I'll move this post to our Discussions for follow-up. The grasp inconsistency you observe is likely caused by minor discrepancies between the Isaac Lab training environment and the Isaac Sim deployment settings. To mitigate this, you may want to check for and minimize such discrepancies on the deployment side.
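One way to surface such discrepancies is to log the observation vector at matching control steps on both sides and compare it term by term. Below is a minimal sketch, assuming the observations have been dumped to NumPy files; the file names and shapes are placeholders, not part of the original reply.

```python
import numpy as np

# Hypothetical logs: observations recorded at the same control steps
# from the Isaac Lab play script and from the ROS/Isaac Sim deployment.
obs_train = np.load("obs_isaaclab.npy")   # shape: (num_steps, num_obs)
obs_deploy = np.load("obs_deploy.npy")    # shape: (num_steps, num_obs)

# Element-wise differences show which observation terms disagree
# (e.g. joint ordering, units, scaling/normalization, reference frames).
diff = np.abs(obs_train - obs_deploy)
per_term_max = diff.max(axis=0)
print("max abs diff per observation term:", per_term_max)
print("worst term index:", int(per_term_max.argmax()))
```

Terms with a large mismatch usually point to differences in joint ordering, units, or observation scaling between the training and deployment pipelines.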
Question
Library: rsl_rl
Robot: FR3 (same dimensions as the Franka)
The robot is fixed in place, and the rest of the setup is hidden in the video because of internal policies.
Data pipeline: Isaac Sim (action graphs) <-> ROS + policy
Isaac Sim version 5.0
IsaacLab version 2.2.1
I am controlling an FR3 robot with a policy trained using rsl_rl. The policy is obtained from logs/.../export after running the play.py script (policy.pt and policy.onnx).
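For reference, a rough sketch of how the exported TorchScript policy can be loaded and queried on the deployment side; the run directory and observation size below are placeholders, and the exact export layout may differ.

```python
import torch

# Placeholder path: the actual run directory under logs/ is omitted here.
policy = torch.jit.load("logs/.../export/policy.pt", map_location="cpu")
policy.eval()

NUM_OBS = 48  # placeholder: must match the observation dimension used in training

# Build the observation exactly as the training environment does
# (same term order, units, and scaling), then run a forward pass.
obs = torch.zeros(1, NUM_OBS)
with torch.no_grad():
    action = policy(obs)
print(action.shape)
```

One thing worth ruling out is stochastic action sampling at deployment time: the exported actor normally returns the deterministic mean action, so identical observations should produce identical actions. If they do, the run-to-run variation is coming from the inputs or the simulation state rather than from the policy itself.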
Even though I use ground-truth information and the object position is the same every time, the robot sometimes fails to lift the object. I attached a video here.
So, how do I deal with this uncertainty?
policy_deploy_uncertain.mp4