-
I have written an RL algorithm that trains a policy to complete the Isaac-Reach-Franka-v0 task/environment (for example).
I now want to switch to a Kinova arm for the same task, and I was wondering what the easiest way to do this is. Is it as simple as changing the USD that is loaded, or is there a lot of custom code to be written for the task itself?
I have spent much of my time on the algorithmic side of things and less on working with IsaacLab in this capacity, so I would be grateful for any guidance or advice.
Thanks.
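For context on what such a swap touches: in Isaac Lab's manager-based workflow the robot is one field of the environment's scene configuration, and the robot-specific joint and body names are likewise plain config fields, so swapping arms is mostly a config override rather than new task code. A minimal sketch of the pattern, paraphrasing the Franka reach config (module paths follow recent releases where the packages are named `isaaclab*`; older versions use the `omni.isaac.lab*` namespaces):

```python
# Sketch of the manager-based pattern, loosely following the Franka reach
# config shipped with Isaac Lab. Paths/names may differ by version.
from isaaclab.utils import configclass
from isaaclab_assets import FRANKA_PANDA_CFG

import isaaclab_tasks.manager_based.manipulation.reach.mdp as mdp
from isaaclab_tasks.manager_based.manipulation.reach.reach_env_cfg import ReachEnvCfg


@configclass
class FrankaReachEnvCfg(ReachEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        # The robot is just one entry of the scene config: swap the asset here.
        self.scene.robot = FRANKA_PANDA_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
        # Robot-specific joint/body names also live in the config, not in task code.
        self.actions.arm_action = mdp.JointPositionActionCfg(
            asset_name="robot", joint_names=["panda_joint.*"], scale=0.5, use_default_offset=True
        )
        self.commands.ee_pose.body_name = "panda_hand"
```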
-
I suppose what I am looking for is something similar to this tutorial (Modifying an existing Direct RL environment), but for a manager-based RL environment instead.
-
Thanks for starting this discussion. That is correct, the tutorial on Modifying an existing Direct RL environment is the one to follow. Let us know if you need further help.
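Following that tutorial's approach but in the manager-based workflow, the swap reduces to a new env-cfg subclass plus a gym registration. A hedged sketch, assuming a `KINOVA_GEN3_N7_CFG` asset config is available in `isaaclab_assets` (the Kinova joint and body names below are illustrative and should be read off the actual USD; the task id is hypothetical):

```python
# A minimal sketch of a Kinova variant of the manager-based reach task.
# Assumptions: KINOVA_GEN3_N7_CFG exists in your isaaclab_assets version,
# and the joint/body names match the Kinova Gen3 USD -- verify both.
import gymnasium as gym

from isaaclab.utils import configclass
from isaaclab_assets import KINOVA_GEN3_N7_CFG

import isaaclab_tasks.manager_based.manipulation.reach.mdp as mdp
from isaaclab_tasks.manager_based.manipulation.reach.reach_env_cfg import ReachEnvCfg


@configclass
class KinovaReachEnvCfg(ReachEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        # Swap in the Kinova articulation.
        self.scene.robot = KINOVA_GEN3_N7_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
        # Update the robot-specific names used by the managers.
        self.actions.arm_action = mdp.JointPositionActionCfg(
            asset_name="robot", joint_names=["joint_.*"], scale=0.5, use_default_offset=True
        )
        # Assumed end-effector body name; check the asset's USD for the real one.
        self.commands.ee_pose.body_name = "end_effector_link"
        self.rewards.end_effector_position_tracking.params["asset_cfg"].body_names = ["end_effector_link"]


# Register under a new task id so existing training scripts can select it.
# (The shipped configs register via a string entry point; passing the class
# directly is also accepted by the config registry loader.)
gym.register(
    id="Isaac-Reach-Kinova-v0",  # hypothetical id for this sketch
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={"env_cfg_entry_point": KinovaReachEnvCfg},
)
```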