Replies: 2 comments
-
Thank you for posting this. Typically, a direct workflow will provide more flexibility to implement such an environment.
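For example, in a direct-workflow environment the policy output never has to touch an articulation: the raw action tensor can be interpreted as the four predicted points and scored against the ground truth directly in the reward computation. A rough sketch of that core step in plain PyTorch (the tensor names here are placeholders; in Isaac Lab this logic would typically sit in the action/reward callbacks of a `DirectRLEnv` subclass, e.g. `_pre_physics_step()` and `_get_rewards()`, though the exact hooks depend on the release):

```python
import torch


def prediction_reward(actions: torch.Tensor, target_points: torch.Tensor) -> torch.Tensor:
    """Score a batch of predicted control points against the ground truth.

    actions:       (num_envs, 12) raw policy output, 4 points * XYZ
    target_points: (num_envs, 4, 3) ground-truth control-point positions
    returns:       (num_envs,) reward, higher when the prediction is closer
    """
    predicted_points = actions.view(-1, 4, 3)                    # no joints, just a reshape
    dist = torch.norm(predicted_points - target_points, dim=-1)  # per-point distance, (num_envs, 4)
    return -dist.mean(dim=-1)                                    # negative mean error as reward


# quick shape check with random tensors for 8 parallel environments
print(prediction_reward(torch.randn(8, 12), torch.rand(8, 4, 3)).shape)  # torch.Size([8])
```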
-
Direct workflow makes sense, but I think this is similar to #1607? Would it make more sense to use Warp for this, or does Isaac Lab already allow for these kinds of predictions (using somehow …
-
Hey all,
I was trying to use an actuator as a "prediction", i.e. the output of the NN. I have 4 control points that I want to predict; the input layer would be a depth camera, using the CartpoleDepthCameraEnvCfg as a basis. The ground truth is the known control points, which would drive the reward function: the greater the distance between the current prediction and the ground truth, the greater the error. It's a continuous action space, so I'm planning on using Twin Delayed DDPG (TD3) for this.
I had planned to use the actuators as the prediction, with 12 DOF of prismatic joints (one along each of X, Y, and Z for each control point). Since I would need to use rigid bodies to connect the joints, I don't think this is the right way to go, as it's only a prediction in space and not a physical object.
An example would be 4 prims lerping continuously between random 3D points. Each prim's current position would be the ground truth, and the observation is the depth camera array. I want to predict the continuous position of each prim.
How could I make the predictions in isolation rather than as an articulation command?
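To make the idea concrete, here is a rough sketch of how I picture the ground truth being maintained, just plain PyTorch with placeholder names (not Isaac Lab API): four points lerping between random 3D waypoints, whose current positions the policy output would be compared against.

```python
import torch


class LerpingTargets:
    """Placeholder sketch: 4 ground-truth points lerping between random 3D waypoints."""

    def __init__(self, num_envs: int, num_points: int = 4, device: str = "cpu"):
        self.start = torch.rand(num_envs, num_points, 3, device=device)
        self.end = torch.rand(num_envs, num_points, 3, device=device)
        self.alpha = torch.zeros(num_envs, 1, 1, device=device)

    def step(self, dt: float = 0.02, speed: float = 0.5) -> torch.Tensor:
        """Advance the interpolation and return the current (num_envs, num_points, 3) positions."""
        self.alpha = self.alpha + speed * dt
        finished = self.alpha.view(-1) >= 1.0
        if finished.any():
            # envs that reached their waypoint get a new random target
            self.start[finished] = self.end[finished]
            self.end[finished] = torch.rand_like(self.end[finished])
            self.alpha[finished] = 0.0
        return torch.lerp(self.start, self.end, self.alpha)


targets = LerpingTargets(num_envs=8)
ground_truth = targets.step()  # compare the 12-dim prediction (reshaped to (8, 4, 3)) against this
print(ground_truth.shape)      # torch.Size([8, 4, 3])
```

The prims in the scene would then just be moved to these positions each step for visualization, while the reward only ever compares tensors.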