A custom OpenAI Gym environment and Pygame GUI for simulating Clash of Clans–style attack mechanics, combined with a Proximal Policy Optimization (PPO) agent (via Stable-Baselines3) to learn optimal attacking strategies.
- Environment: A Gym-compatible environment (`WarzoneEnv`) that models base layouts, troop deployment, and reward functions for RL.
- GUI: A Pygame + pygame_gui application (`app.py`) with screens for:
  - Base design
  - Troop selection
  - Attack simulation (where the agent plays)
- Agent: A PPO-based reinforcement learning agent implemented in `model.py` using Stable-Baselines3.
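The environment follows the standard Gym `reset()`/`step()` loop. The sketch below shows how `WarzoneEnv` would be driven; since the real class lives in this repo, a toy stand-in with the same interface is used here so the loop runs standalone (observation contents and reward semantics are illustrative assumptions):

```python
import random

class ToyWarzoneEnv:
    """Stand-in with the same reset()/step() shape a Gym env exposes."""

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # placeholder observation

    def step(self, action):
        self.t += 1
        obs = [float(self.t)]
        reward = random.random()          # e.g. destruction percentage gained
        done = self.t >= self.max_steps   # episode ends when the attack is over
        return obs, reward, done, {}

env = ToyWarzoneEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = 0  # a trained agent would pick a troop + drop location here
    obs, reward, done, info = env.step(action)
    total_reward += reward
```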
- Clone this repo:

  ```bash
  git clone https://github.com/your-org/ClashGymEnv.git
  cd ClashGymEnv
  ```

- (Optional) Create and activate a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate   # macOS / Linux
  venv\Scripts\activate.bat  # Windows
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
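For reference, a `requirements.txt` for this project would at minimum cover the packages named above; the exact names and any version pins are assumptions, not taken from the repo:

```text
gym
pygame
pygame_gui
stable-baselines3
```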
- Open `model.py` and configure your hyperparameters (learning rate, timesteps, etc.).
- Run the training script:

  ```bash
  python model.py --train --timesteps 1000000
  ```

- After training completes, a file `ppo_model.zip` (or similar) will be saved in the project root.
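A minimal sketch of how `model.py` could parse the `--train` and `--timesteps` flags shown above; the `--save-path` flag and the commented Stable-Baselines3 calls are assumptions about the script's shape, not verified against the repo:

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Train or evaluate the PPO agent")
    parser.add_argument("--train", action="store_true", help="run training")
    parser.add_argument("--timesteps", type=int, default=1_000_000,
                        help="total environment steps to train for")
    parser.add_argument("--save-path", default="ppo_model",
                        help="checkpoint name (hypothetical flag)")
    return parser.parse_args(argv)

args = parse_args(["--train", "--timesteps", "1000000"])
# model.py would then do something like (assumed, per the Stable-Baselines3 API):
#   model = PPO("MlpPolicy", WarzoneEnv(), verbose=1)
#   model.learn(total_timesteps=args.timesteps)
#   model.save(args.save_path)   # writes ppo_model.zip
```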
With a trained model in place, launch the Pygame application:

```bash
python app.py
```

- Main Screen: Select your Town Hall level.
- Base Design: Lay out defenses and buildings.
- Troop Selection: Choose your deck composition.
- Attack Simulation: Watch the PPO agent deploy troops and execute its learned strategy.
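The four screens above form a simple linear flow. A hedged sketch of how `app.py` might model that progression; the screen names and the table-driven approach are illustrative assumptions, the real app may structure this differently:

```python
# Hypothetical screen-flow table for the GUI.
SCREEN_ORDER = ["main", "base_design", "troop_selection", "attack_simulation"]

def next_screen(current):
    """Advance to the following screen, staying on the last one."""
    i = SCREEN_ORDER.index(current)
    return SCREEN_ORDER[min(i + 1, len(SCREEN_ORDER) - 1)]
```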
- Create a new agent file in `agents/` (e.g., `q_learning.py`).
- Implement an `Agent` class with methods:
  - `select_action(state)`
  - `train(env, episodes)`
  - `save(path)` / `load(path)`
- In `ui_attack_screen.py`, import and instantiate your agent alongside the existing PPO agent.
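The interface described above can be sketched as a base class. This is a minimal assumption of the expected shape, not code from the repo; the pickle-based `save`/`load` and the `RandomAgent` example are illustrative placeholders:

```python
import pickle
import random

class Agent:
    """Assumed base interface for new agents in agents/."""

    def select_action(self, state):
        """Return an action for the given observation."""
        raise NotImplementedError

    def train(self, env, episodes):
        """Run a training loop against the environment."""
        raise NotImplementedError

    def save(self, path):
        # Placeholder persistence: pickle the agent's attributes.
        with open(path, "wb") as f:
            pickle.dump(self.__dict__, f)

    def load(self, path):
        with open(path, "rb") as f:
            self.__dict__.update(pickle.load(f))

class RandomAgent(Agent):
    """Example subclass: ignores the state and acts uniformly at random."""

    def __init__(self, n_actions=4):
        self.n_actions = n_actions

    def select_action(self, state):
        return random.randrange(self.n_actions)

    def train(self, env, episodes):
        pass  # nothing to learn
```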
See the LICENSE file for details.