
Stuck at Local Minimum in PPO with CarRacing-v2 Environment #87

@bantu-4879

Description


I've been experimenting with various hyperparameters for Proximal Policy Optimization (PPO) in the CarRacing-v2 environment. After extensive testing, I found a combination that initially shows promising results and learns relatively quickly. However, the learning process stagnates after a certain stage of training.
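
For context, here's a condensed sketch of the kind of training setup I'm running. This assumes stable-baselines3 with a CNN policy and frame stacking; the hyperparameter values below are placeholders, not my exact configuration (that's in the notebook linked under "My Work"):

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# CarRacing-v2 gives 96x96x3 pixel observations; stacking consecutive
# frames lets the policy infer velocity from the observation.
env = DummyVecEnv([lambda: gym.make("CarRacing-v2")])
env = VecFrameStack(env, n_stack=4)

model = PPO(
    "CnnPolicy",
    env,
    learning_rate=3e-4,  # placeholder values, not my exact ones
    n_steps=2048,
    batch_size=64,
    gamma=0.99,
    ent_coef=0.01,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```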

Despite continued training, the agent seems unable to surpass a particular reward threshold. I suspect the algorithm is trapped in a local minimum, and the plateau it has settled at is well below what the environment should allow.
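
For concreteness, the stagnation shows up as a mean episode reward that stops improving. A minimal sketch of one way to measure it (assuming stable-baselines3, whose built-in `evaluate_policy` helper runs the policy for a few episodes):

```python
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the current policy deterministically; at the plateau this
# mean reward stops climbing even as training continues.
mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=True
)
print(f"mean reward: {mean_reward:.1f} +/- {std_reward:.1f}")
```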

Request for Assistance:
I'm seeking guidance on how to help the algorithm escape the local minimum it's currently stuck in. Any insights, suggestions, or alternative approaches would be greatly appreciated. @simoninithomas
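
One direction I'm considering (but haven't confirmed helps) is pushing exploration harder: raising the entropy coefficient and annealing the learning rate. A sketch, again assuming stable-baselines3, with guessed values:

```python
from stable_baselines3 import PPO

def linear_schedule(initial_value):
    # SB3 accepts a callable mapping remaining training progress
    # (1.0 -> 0.0) to the learning rate, giving a linear decay.
    def schedule(progress_remaining):
        return progress_remaining * initial_value
    return schedule

# Same stacked CarRacing-v2 env as in the setup sketch above.
model = PPO(
    "CnnPolicy",
    env,
    learning_rate=linear_schedule(3e-4),
    ent_coef=0.02,  # above SB3's PPO default of 0.0, to encourage exploration
    verbose=1,
)
```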

Environment and Configuration:

  • Environment: CarRacing-v2
  • Algorithm: Proximal Policy Optimization (PPO)

My Work
https://github.com/bantu-4879/Atari_Games-Deep_Reinforcement_Learning/tree/main/Notebooks/CarRacing-v2
