[Question] Is there a way to record the actor coordinates at each step and use them in the terrain curriculum? #2691
Replies: 6 comments 3 replies
-
Thank you for starting this discussion. To accurately compute the total traveled distance of an actor in this way, the best approach is probably to implement a custom manager that caches positions during the episode and computes the distance at episode end. Here's a step-by-step solution you may try:

1. Create a Custom Manager Class

Implement a manager that records positions at each step and computes distances at episode end:

```python
import torch

from omni.isaac.lab.managers import ManagerTermBase


class TraveledDistanceRecorder(ManagerTermBase):
    def __init__(self, config, env):
        super().__init__(config, env)
        self._env = env
        self._num_envs = env.num_envs
        self.device = env.device
        # Buffers to store positions and distances
        self._position_cache = [[] for _ in range(self._num_envs)]
        self._episode_travel_dist = torch.zeros(self._num_envs, device=self.device)

    def reset(self, env_ids):
        # Reset the cache and distance for the specified environments
        for idx in env_ids:
            self._position_cache[idx] = []
            self._episode_travel_dist[idx] = 0.0

    def pre_physics_step(self):
        # Record positions at the start of each physics step
        current_pos, _ = self._env.actors.get_world_poses()
        for i in range(self._num_envs):
            self._position_cache[i].append(current_pos[i].clone())

    def post_episode(self):
        # Calculate the total distance at episode end
        for env_idx in range(self._num_envs):
            if len(self._position_cache[env_idx]) < 2:
                continue
            positions = torch.stack(self._position_cache[env_idx])
            deltas = positions[1:] - positions[:-1]
            distances = torch.norm(deltas, dim=1)
            self._episode_travel_dist[env_idx] = torch.sum(distances)

    def get_episode_travel_dist(self, env_ids=None):
        # Retrieve distances for the curriculum manager
        if env_ids is None:
            return self._episode_travel_dist.clone()
        return self._episode_travel_dist[env_ids].clone()
```

2. Integrate with the Environment

Add the manager to your environment setup:

```python
class MyEnvironment(RLTaskEnv):
    def _setup_managers(self):
        super()._setup_managers()
        self.add_manager(
            "traveled_distance_recorder",
            TraveledDistanceRecorder,
            cfg={},
            env=self,
        )
```

3. Connect to the Curriculum Manager

Access the distance data from the curriculum manager:

```python
class TerrainCurriculumManager:
    def update_curriculum(self, env_ids):
        dist_recorder = self._env.managers["traveled_distance_recorder"]
        travel_distances = dist_recorder.get_episode_travel_dist(env_ids)
        # Use distances for curriculum updates
        self._adjust_difficulty(travel_distances)
```

Key Implementation Notes:
Usage Workflow:
This solution bypasses
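As an aside, caching full per-env position lists can be avoided entirely: accumulating each step's displacement norm into a running total yields the same path length with constant memory. A minimal self-contained PyTorch sketch (the class and method names are illustrative, not Isaac Lab API):

```python
import torch


class RunningDistance:
    """Accumulate per-env path length without caching full trajectories."""

    def __init__(self, num_envs: int, device: str = "cpu"):
        self.prev_pos = torch.zeros(num_envs, 3, device=device)
        self.travel_dist = torch.zeros(num_envs, device=device)

    def reset(self, env_ids: torch.Tensor, start_pos: torch.Tensor) -> None:
        # Re-seed the previous position so the first step after reset
        # is measured from the spawn point, not from the old pose.
        self.prev_pos[env_ids] = start_pos
        self.travel_dist[env_ids] = 0.0

    def record_step(self, current_pos: torch.Tensor) -> None:
        # Add this step's displacement norm to every env's running total.
        self.travel_dist += torch.norm(current_pos - self.prev_pos, dim=1)
        self.prev_pos = current_pos.clone()


# Synthetic check: 4 envs each move 1 m along x for three steps.
rec = RunningDistance(num_envs=4)
for step in range(1, 4):
    pos = torch.zeros(4, 3)
    pos[:, 0] = float(step)
    rec.record_step(pos)

print(rec.travel_dist)  # tensor([3., 3., 3., 3.])
```

This trades the exact per-step trajectory for O(1) memory per env, which matters when episodes are long and the number of parallel envs is large.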
-
Hello @RandomOakForest, I have a question about implementing the code you suggested.
-
I'm also working on this and getting close; I'll update when done. @H-Hisamichi
-
@H-Hisamichi @RandomOakForest I cannot find `_setup_managers` or `add_manager` in the source code; none of the manager-based environment classes have them. These steps seem wrong.
-
I do not suggest following RandomOakForest's approach, but there is a way to do this: then register it in the gym registry, such as this: With this setup you can write your own managers whose manager methods are called automatically by the runner (remember to expose custom manager methods in the necessary wrappers; my example should be a good reference), and the curriculum manager can use functions that rely on these custom managers just fine, as long as you get the other basic grammar right. @H-Hisamichi If you need more details for reference, feel free to ask; I won't write much more here for the sake of clarity.
-
The key takeaway is to have gym use your own environment as the entry point; after that, you can use an inherited `super()` call to do whatever the library modules do first, and then add your own logic on top.
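That inherit-then-extend pattern can be sketched with plain Python classes; `LibraryEnv` and the method names here are stand-ins for illustration, not Isaac Lab API:

```python
class LibraryEnv:
    """Stand-in for a library-provided manager-based environment."""

    def load_managers(self):
        # The library sets up its built-in managers here.
        self.managers = {"curriculum": object()}


class MyEnv(LibraryEnv):
    def load_managers(self):
        # Let the library do its setup first...
        super().load_managers()
        # ...then register your own custom manager on top of it.
        self.managers["traveled_distance"] = object()


env = MyEnv()
env.load_managers()
print(sorted(env.managers))  # ['curriculum', 'traveled_distance']
```

Because the runner only sees the base-class interface, the custom manager rides along with the normal lifecycle calls without any library changes.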
-
Hello everyone,
In the initial implementation of the terrain curriculum in Isaac Lab, the distance moved by the actor is computed as the norm between the starting point and the final position at the end of the episode.
However, this approach can be inaccurate when the actor moves along an irregular path.
To address this, I would like to record the actor's position at every learning step and calculate the total traveled distance by summing the norms of the position differences between consecutive steps within an episode.
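To make the difference concrete, here is a small PyTorch sketch (synthetic trajectory, not Isaac Lab code) contrasting the straight-line displacement with the summed per-step path length:

```python
import torch

# Zig-zag trajectory: the actor ends only 2 m from the start
# but walks a much longer path to get there.
positions = torch.tensor([
    [0.0, 0.0],
    [1.0, 1.0],
    [2.0, 0.0],
    [1.0, -1.0],
    [2.0, 0.0],
])

# Straight-line displacement: norm between the start and final position.
displacement = torch.norm(positions[-1] - positions[0])

# Total path length: sum of per-step displacement norms.
path_length = torch.norm(positions[1:] - positions[:-1], dim=1).sum()

print(displacement)  # tensor(2.)
print(path_length)   # tensor(5.6569) -- i.e. 4 * sqrt(2)
```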
However, it seems that `RecorderManager` only allows access to position data after training is complete. Is there a way to record actor positions at each step and access that data at the end of the episode?
Thanks in advance!