Is it possible to create a custom discrete action to teleport a specific object at each step during RL (RSL-RL or SKRL) training? #1953
Unanswered
JeonHaneul asked this question in Q&A
Replies: 1 comment
-
Thanks for posting this. We don't have a tutorial on this yet, but you could review this code to get started.
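A minimal sketch of one possible starting point for such a term in the manager-based workflow, assuming the ActionTerm/ActionTermCfg interface (process_actions/apply_actions) and RigidObject.write_root_pose_to_sim; every other name below (the classes, object names, and drop-off pose) is a placeholder, and the import paths may be omni.isaac.lab.* rather than isaaclab.* depending on the release:

```python
import torch

from isaaclab.assets import RigidObject
from isaaclab.managers import ActionTerm, ActionTermCfg
from isaaclab.utils import configclass


class ObjectTeleportAction(ActionTerm):
    """Discrete action: pick one of N spawned objects and teleport it to a drop-off pose."""

    cfg: "ObjectTeleportActionCfg"

    def __init__(self, cfg: "ObjectTeleportActionCfg", env):
        super().__init__(cfg, env)
        # rigid objects that the policy is allowed to remove from the scene
        self._objects: list[RigidObject] = [env.scene[name] for name in cfg.object_names]
        self._raw_actions = torch.zeros(self.num_envs, 1, device=self.device)
        self._indices = torch.zeros(self.num_envs, dtype=torch.long, device=self.device)
        # drop-off pose (x, y, z, qw, qx, qy, qz), repeated for every environment
        self._target_pose = torch.tensor(cfg.target_pose, device=self.device).repeat(self.num_envs, 1)

    @property
    def action_dim(self) -> int:
        # one number per environment: the index of the object to remove this step
        return 1

    @property
    def raw_actions(self) -> torch.Tensor:
        return self._raw_actions

    @property
    def processed_actions(self) -> torch.Tensor:
        return self._raw_actions

    def process_actions(self, actions: torch.Tensor):
        # RSL-RL/SKRL hand over float tensors; round and clamp to a valid object index
        self._raw_actions[:] = actions
        self._indices = actions.round().long().clamp_(0, len(self._objects) - 1).squeeze(-1)

    def apply_actions(self):
        # teleport the selected object in each environment to the drop-off pose
        for obj_idx, obj in enumerate(self._objects):
            env_ids = (self._indices == obj_idx).nonzero(as_tuple=False).squeeze(-1)
            if env_ids.numel() > 0:
                obj.write_root_pose_to_sim(self._target_pose[env_ids], env_ids=env_ids)


@configclass
class ObjectTeleportActionCfg(ActionTermCfg):
    class_type: type = ObjectTeleportAction
    asset_name: str = "object_0"  # the base class may resolve this; point it at any valid scene entity
    object_names: list[str] = ["object_0", "object_1", "object_2"]
    target_pose: tuple = (2.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.0)
```

The term would then be registered in the environment's ActionsCfg like any other entry (e.g. remove_object = ObjectTeleportActionCfg()). Since RSL-RL's built-in actor-critic uses a Gaussian (continuous) policy, the rounding above is one simple way to get a discrete choice out of it; skrl also provides categorical policy models if a genuinely discrete distribution is preferred.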
-
Hello,
I am trying to train a policy using RSL-RL or SKRL in IsaacLab to learn the optimal sequence of removing spawned objects in order to find a hidden target object. However, it seems that the action functions and examples provided by IsaacLab mainly involve continuous action spaces where the robot interacts with the environment step by step.
I have a few questions regarding this:
Figure 1
Figure 2
In the existing IsaacLab examples and action implementations, as shown in Figure 1, the target object continuously moves toward the goal step by step. However, my goal is different. As shown in Figure 2, I want to implement an action that teleports specific objects to a designated location at each step—without an agent like a robot physically interacting with the environment.
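To make this concrete, the per-step operation I have in mind is essentially overwriting an object's root pose, roughly like the sketch below (the object name and drop-off pose are placeholders, and I am assuming RigidObject.write_root_pose_to_sim is the appropriate call for this):

```python
import torch

# Per-step "removal" of one object: teleport it to a fixed pose outside the workspace.
# The object name and pose are placeholders for whatever is spawned in my scene.
obj = env.scene["object_0"]                      # an isaaclab.assets.RigidObject
drop_pose = torch.tensor([[2.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.0]], device=env.device)
drop_pose = drop_pose.repeat(env.num_envs, 1)    # (num_envs, 7): position + quaternion (w, x, y, z)

obj.write_root_pose_to_sim(drop_pose)            # move it instantly in all environments
obj.write_root_velocity_to_sim(torch.zeros(env.num_envs, 6, device=env.device))  # and zero its velocity
```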
From the example code and action implementations I have reviewed, I am unsure whether this is possible. Specifically, I would like to receive RGB images and object poses at each step, process them, and then apply teleportation actions accordingly. However, I am uncertain how to approach this within IsaacLab’s framework.
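For the observation side, I picture custom observation functions along these lines, which would then be registered as ObsTerm entries in the observation config (the sensor name "rgb_camera", the object names, and the exact data fields are assumptions on my part):

```python
import torch

# Custom observation functions for a manager-based env. "rgb_camera" and the
# object names are placeholders for the sensors/assets actually in my scene.

def camera_rgb(env) -> torch.Tensor:
    """Flattened RGB image from a camera sensor (dtype/range may differ by sensor type)."""
    rgb = env.scene["rgb_camera"].data.output["rgb"]       # (num_envs, H, W, 3 or 4)
    return rgb[..., :3].reshape(env.num_envs, -1).float() / 255.0


def object_poses(env) -> torch.Tensor:
    """Concatenated root poses (position + quaternion) of the removable objects."""
    names = ["object_0", "object_1", "object_2"]
    poses = [env.scene[name].data.root_pose_w for name in names]   # each (num_envs, 7)
    return torch.cat(poses, dim=-1)
```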
Is it possible to implement such teleportation-based actions in IsaacLab? If so, how should I modify the existing action functions to achieve this? Any guidance or example references would be greatly appreciated.
Thank you!