Issue with multiple envs and determinism #27

@araffin

Description

Hello,

Thanks for the project, it looks awesome.

I've been trying to use Stable-Baselines3 on it (we created a fork to register the gym env: https://github.com/osigaud/rex-gym).
We could train an agent, but after training, or when using a second env for testing, we could not reproduce the results.

Do you know what can change between two instantiations of the environment?
It seems that the observation provided to the agent is somehow quite different from the one seen during training.
(we are testing simple walk forward on a plane)

I'm using the RL Zoo to train the agent (to rule out any mistake on my part). It works perfectly with other pybullet envs (e.g. HalfCheetahBulletEnv-v0) but not with rex-gym :/

Additionally, it seems that the env is not deterministic, could you confirm? And do you know why?

PS: if needed I can provide a minimal example to reproduce the issue
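For context, the kind of determinism check we ran is sketched below. This is a minimal illustration only: `DummyEnv` is a hypothetical stand-in for the registered rex-gym env (the real one needs pybullet), and the classic gym `seed()/reset()/step()` interface is assumed. The check creates two fresh envs with the same seed, feeds them the same action sequence, and compares the resulting observations element-wise:

```python
import numpy as np

class DummyEnv:
    """Hypothetical stand-in for the rex-gym env, exposing the classic
    gym interface: seed(), reset() -> obs, step(a) -> (obs, r, done, info)."""

    def __init__(self, noisy=False):
        self.noisy = noisy
        self.rng = np.random.RandomState()
        self.state = None

    def seed(self, seed):
        self.rng = np.random.RandomState(seed)

    def reset(self):
        # Initial state drawn from the seeded RNG, as a seeded env should do
        self.state = self.rng.uniform(-1.0, 1.0, size=3)
        return self.state.copy()

    def step(self, action):
        self.state = self.state + 0.1 * np.asarray(action)
        if self.noisy:
            # Unseeded noise: this is the kind of hidden randomness
            # that makes two runs diverge
            self.state = self.state + np.random.uniform(-1e-3, 1e-3, size=3)
        return self.state.copy(), 0.0, False, {}

def check_determinism(make_env, n_steps=50, seed=0):
    """Run two freshly created envs with the same seed and the same
    action sequence; return True iff every observation matches exactly."""
    actions = [np.full(3, 0.5) for _ in range(n_steps)]
    trajectories = []
    for _ in range(2):
        env = make_env()
        env.seed(seed)
        observations = [env.reset()]
        for action in actions:
            obs, _, _, _ = env.step(action)
            observations.append(obs)
        trajectories.append(np.array(observations))
    return np.array_equal(trajectories[0], trajectories[1])
```

With rex-gym in place of `DummyEnv`, this check fails for us, which is what makes us suspect some source of randomness in the env is not controlled by the seed.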
