Description
Hello,
Thanks for the project, it looks awesome.
I've been trying to use Stable-Baselines3 with it (we created a fork to register the gym env: https://github.com/osigaud/rex-gym)
and managed to train an agent. However, after training, or when using a second env instance for testing, we could not reproduce the results.
Do you know what could change between two instantiations of the environment?
It seems that the observation provided to the agent is somehow quite different from the one seen during training.
(we are testing simple walk forward on a plane)
I'm using the RL Zoo to train the agent (to rule out any mistake on my part). It works perfectly with other PyBullet envs (e.g. HalfCheetahBulletEnv-v0) but not with rex-gym :/
Additionally, it seems that the env is not deterministic. Could you confirm, and do you know why?
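For reference, here is the kind of determinism check we ran: roll out the same seeded action sequence twice in two fresh env instances and compare the observations step by step. This is only a minimal sketch assuming the classic Gym API (`env.seed(s)`, `reset()` returning an observation, `step()` returning a 4-tuple); `DummyEnv` below is a stand-in I wrote for illustration, not the actual rex-gym env.

```python
import random


def check_determinism(make_env, n_steps=50, seed=0):
    """Return the largest per-component observation difference between
    two identically seeded rollouts. 0.0 means the env reproduced
    exactly; anything larger indicates a source of non-determinism."""
    def rollout():
        env = make_env()
        env.seed(seed)               # seed the env's own RNG
        env.action_space.seed(seed)  # seed action sampling too
        trace = [tuple(env.reset())]
        for _ in range(n_steps):
            obs, _, done, _ = env.step(env.action_space.sample())
            trace.append(tuple(obs))
            if done:
                break
        return trace

    a, b = rollout(), rollout()
    return max(max(abs(x - y) for x, y in zip(oa, ob))
               for oa, ob in zip(a, b))


class _Box:
    """Tiny stand-in for a gym Box action space (sample/seed only)."""
    def __init__(self, dim):
        self.dim = dim
        self._rng = random.Random()

    def seed(self, s):
        self._rng = random.Random(s)

    def sample(self):
        return [self._rng.uniform(-1, 1) for _ in range(self.dim)]


class DummyEnv:
    """Deterministic toy env standing in for the real rex-gym env."""
    def __init__(self):
        self.action_space = _Box(2)
        self._rng = random.Random()
        self.state = [0.0, 0.0]

    def seed(self, s):
        self._rng = random.Random(s)

    def reset(self):
        self.state = [self._rng.uniform(-0.1, 0.1) for _ in range(2)]
        return list(self.state)

    def step(self, action):
        self.state = [s + 0.1 * a for s, a in zip(self.state, action)]
        return list(self.state), 0.0, False, {}


print(check_determinism(DummyEnv))  # deterministic env -> 0.0
```

With rex-gym plugged in as `make_env`, the returned difference is noticeably non-zero for us, which is what makes me suspect the env itself is non-deterministic.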
PS: if needed, I can provide a minimal example to reproduce the issue.