Replies: 1 comment
Thanks for posting this discussion. You bring up very good points. The team is reviewing how best to follow up and will reply soon. Thank you for your interest in Isaac Lab.
Manager-based Multi-agent environment
I'm implementing a multi-agent RL environment in IsaacLab using the manager-based approach. Currently, I duplicate assets based on the number of agents and replicate the corresponding configurations for observations, actions, rewards, and terminations.
Current Implementation
For example, for observations I currently duplicate the observation groups, modifying them per agent, roughly as sketched below:
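(A minimal sketch of that duplication, assuming the `omni.isaac.lab` import paths and the stock `mdp` observation terms; the asset names `robot_0`/`robot_1` and the group names are placeholders for however the duplicated assets are registered in the scene.)

```python
from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.utils import configclass
import omni.isaac.lab.envs.mdp as mdp


@configclass
class ObservationsCfg:
    """Observation groups duplicated manually, one per agent."""

    @configclass
    class Agent0Cfg(ObsGroup):
        # terms bound to the first agent's articulation
        joint_pos = ObsTerm(func=mdp.joint_pos_rel,
                            params={"asset_cfg": SceneEntityCfg("robot_0")})
        joint_vel = ObsTerm(func=mdp.joint_vel_rel,
                            params={"asset_cfg": SceneEntityCfg("robot_0")})

        def __post_init__(self):
            self.concatenate_terms = True

    @configclass
    class Agent1Cfg(ObsGroup):
        # same terms, re-pointed at the second agent's articulation
        joint_pos = ObsTerm(func=mdp.joint_pos_rel,
                            params={"asset_cfg": SceneEntityCfg("robot_1")})
        joint_vel = ObsTerm(func=mdp.joint_vel_rel,
                            params={"asset_cfg": SceneEntityCfg("robot_1")})

        def __post_init__(self):
            self.concatenate_terms = True

    # one group per agent, so the observation buffer is a dict keyed by group name
    agent_0: Agent0Cfg = Agent0Cfg()
    agent_1: Agent1Cfg = Agent1Cfg()
```

With this layout the observation manager returns a dictionary keyed by group name (`agent_0`, `agent_1`), which is the structured format referred to below.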
Current Issue
While this setup lets me obtain observations in a structured dictionary format, handling actions, rewards, and terminations as single flat vectors is problematic. Ideally, these should also support a grouping mechanism similar to observation groups, so that each agent's actions, rewards, and termination flags can be addressed separately; the sketch below shows the bookkeeping the current setup requires.
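(The agent count and sizes here are illustrative assumptions, not values from the actual task.)

```python
import torch

# Illustrative sizes only: 2 agents, 12 actions each, 1024 parallel envs.
num_envs, num_agents, act_dim = 1024, 2, 12

# The action manager expects one flat vector per environment, so per-agent
# policy outputs have to be concatenated by hand before stepping ...
per_agent_actions = [torch.zeros(num_envs, act_dim) for _ in range(num_agents)]
flat_actions = torch.cat(per_agent_actions, dim=-1)  # shape: (num_envs, 24)

# ... and the reward/termination managers hand back a single tensor per
# environment, so per-agent rewards and done flags cannot be recovered from
# the managers themselves; they have to be re-derived outside the environment.
rewards = torch.zeros(num_envs)                        # one scalar per env
terminated = torch.zeros(num_envs, dtype=torch.bool)   # one flag per env
```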
Proposed Solution
A recent pull request (#729) introduces a similar concept for reward grouping. Extending this to terminations and actions would significantly improve the manager-based multi-agent setup.
I propose extending the same grouping mechanism to the action and termination managers, so that a manager-based multi-agent environment can expose per-agent actions, rewards, and terminations just as it already does for observations. A hypothetical sketch for terminations follows.
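To make the shape of the proposal concrete: `TerminationGroupCfg` below does not exist in Isaac Lab today and is invented purely to mirror `ObservationGroupCfg` and the reward-grouping direction of #729; only `TerminationTermCfg`, `configclass`, and `mdp.time_out` are existing pieces.

```python
from omni.isaac.lab.managers import TerminationTermCfg as DoneTerm
from omni.isaac.lab.utils import configclass
import omni.isaac.lab.envs.mdp as mdp


@configclass
class TerminationGroupCfg:
    """Hypothetical base class, analogous to ObservationGroupCfg."""


@configclass
class TerminationsCfg:
    @configclass
    class Agent0Dones(TerminationGroupCfg):
        # terms for the first agent; time_out=True marks a truncation
        time_out = DoneTerm(func=mdp.time_out, time_out=True)

    @configclass
    class Agent1Dones(TerminationGroupCfg):
        time_out = DoneTerm(func=mdp.time_out, time_out=True)

    # one termination group per agent, so the termination manager could
    # return a dict of per-agent done tensors instead of a single vector
    agent_0: Agent0Dones = Agent0Dones()
    agent_1: Agent1Dones = Agent1Dones()
```

An analogous action grouping would let each agent own its slice of the action space explicitly, so that `env.step()` could consume and return per-agent dictionaries end to end.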
Would love to hear your thoughts and insights!