
# Chrome Dino Game AI Bot

An AI-powered bot that plays the Chrome Dinosaur Game. Includes a perfect deterministic bot that plays indefinitely, plus reinforcement learning (Q-learning) bots for experimentation. Uses Selenium for browser automation.


## Bots Overview

| Bot | File | Approach | Learns? | Performance |
|---|---|---|---|---|
| Perfect Bot | `dino_bot_perfect.py` | Physics-based rules | No (doesn't need to) | Near-unbeatable |
| Basic Bot | `dino_bot.py` | Adaptive thresholds | Yes (JSON) | Good after tuning |
| Q-Learning Bot | `dino_bot_ml.py` | Reinforcement learning | Yes (pickle) | Improves over episodes |
| Q-Learning Headless | `dino_bot_ml_headless.py` | RL + headless Chrome | Yes (pickle) | Fastest training |

## Perfect Bot (`dino_bot_perfect.py`)

The star of the show. Uses precise game-state reading and physics-based calculations to make optimal decisions:

  • Speed-adaptive jump timing — calculates exact reaction distance based on current game speed
  • Obstacle type awareness — jumps over cacti, ducks under low pterodactyls, runs under high ones
  • Multi-obstacle lookahead — checks if a second obstacle is close behind to adjust timing
  • Fast-fall mechanic — presses DOWN while airborne to descend faster when needed
  • 125 Hz polling — reads game state ~125 times/second for precise reactions
```bash
python3 dino_bot_perfect.py              # visible mode
python3 dino_bot_perfect.py --headless   # headless mode
```

### How It Decides

For each game tick (~8ms):
  1. Read exact obstacle positions, types, dino state from Runner.instance_
  2. Nearest obstacle is a cactus?
     → Jump when obstacle is within speed-dependent reaction distance
  3. Nearest obstacle is a low pterodactyl (yPos >= 70)?
     → Duck and hold until it passes
  4. Nearest obstacle is a high pterodactyl (yPos < 60)?
     → Do nothing — run right under it
  5. Two obstacles close together?
     → Adjust jump timing to clear both
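The rules above can be condensed into a single decision function. This is an illustrative sketch: the reaction-distance coefficients, the helper names, and the fallback for mid-height pterodactyls (yPos between 60 and 70, which the rules above leave unspecified) are assumptions, not the repo's actual constants.

```python
# Sketch of the perfect bot's per-tick decision. Coefficients in
# reaction_distance and the mid-height fallback are illustrative.

def reaction_distance(speed: float, base: float = 60.0, factor: float = 9.0) -> float:
    """Distance at which to react; grows with current game speed."""
    return base + factor * speed

def choose_action(obstacles: list, speed: float) -> str:
    """Return 'jump', 'duck', or 'run' for the nearest obstacle."""
    if not obstacles:
        return "run"
    nearest = obstacles[0]
    if nearest["xPos"] > reaction_distance(speed):
        return "run"  # too far away to act yet
    if nearest["type"].startswith("CACTUS"):
        return "jump"
    # Pterodactyl: yPos >= 70 is low (duck), yPos < 60 is high (run under)
    if nearest["yPos"] >= 70:
        return "duck"
    if nearest["yPos"] < 60:
        return "run"
    return "jump"  # assumed fallback for mid-height pterodactyls
```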

## Basic Bot (`dino_bot.py`)

Adaptive threshold-based bot that learns from crashes:

  • Maintains separate jump and duck thresholds per speed range
  • Identifies obstacle types (cactus vs pterodactyl) and reacts accordingly
  • Adjusts thresholds based on crash feedback (jumped too early/late)
  • Saves progress to learning_data.json
```bash
python3 dino_bot.py
```
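A minimal sketch of the crash-feedback idea, assuming per-speed-bucket jump thresholds persisted as JSON. The bucket boundaries, field names, and step size here are hypothetical, not the schema of `learning_data.json`.

```python
import json

# Illustrative adaptive-threshold update: after a crash, nudge the jump
# threshold for the current speed bucket toward the mistake.

def speed_bucket(speed: float) -> str:
    """Coarse speed range used as a key for per-speed thresholds (assumed cutoffs)."""
    return "slow" if speed < 8 else "medium" if speed < 11 else "fast"

def adjust_on_crash(thresholds: dict, speed: float, jumped_early: bool,
                    step: float = 5.0) -> dict:
    """Jumped too early -> react later (smaller distance), and vice versa."""
    key = speed_bucket(speed)
    current = thresholds.get(key, 100.0)  # assumed default threshold
    thresholds[key] = current - step if jumped_early else current + step
    return thresholds

def save_thresholds(thresholds: dict, path: str = "learning_data.json") -> None:
    """Persist progress between runs."""
    with open(path, "w") as f:
        json.dump(thresholds, f)
```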

## Q-Learning Bots (`dino_bot_ml.py` / `dino_bot_ml_headless.py`)

Reinforcement learning bots with improved state representation:

  • 5 speed categories (finer than original 3) for better discrimination
  • Dino action state (running/jumping/ducking) prevents invalid actions
  • Obstacle height categories encode the correct response (jump/duck/ignore)
  • Action masking — can't jump while already jumping
  • Streak bonuses — extra reward for consecutive obstacle passes
  • Saves Q-table to learning_data.pkl for continuous improvement
```bash
python3 dino_bot_ml.py            # visible mode (50 episodes)
python3 dino_bot_ml_headless.py   # headless mode (faster training)
```

### RL Hyperparameters

| Parameter | Value | Purpose |
|---|---|---|
| Alpha | 0.15 | Learning rate |
| Gamma | 0.95 | Discount factor |
| Epsilon start | 1.0 | Initial exploration |
| Epsilon decay | 0.995 | Per-episode decay |
| Epsilon min | 0.02 | Minimum exploration |
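These values plug into the standard tabular Q-learning update. The sketch below is generic, using the table's hyperparameters with a simplified state/action encoding rather than the repo's actual one:

```python
# Tabular Q-learning backup with the hyperparameters listed above.
# States and actions here are stand-ins for the bot's real encoding.

ALPHA, GAMMA = 0.15, 0.95
EPS_START, EPS_DECAY, EPS_MIN = 1.0, 0.995, 0.02

def q_update(q: dict, state, action, reward: float, next_state,
             actions=("run", "jump", "duck")) -> dict:
    """One Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    return q

def decay_epsilon(eps: float) -> float:
    """Per-episode exploration decay, floored at EPS_MIN."""
    return max(EPS_MIN, eps * EPS_DECAY)
```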

## Requirements

### Prerequisites

  1. Python 3.8+
  2. Google Chrome (latest version)
  3. ChromeDriver matching your Chrome version

### Install Dependencies

```bash
git clone https://github.com/ran2207/chrome-dino-ai.git
cd chrome-dino-ai
pip install -r requirements.txt
```

## How It Works

All bots interact with the Chrome Dino game through Selenium WebDriver and JavaScript injection:

  1. Open the game at https://chromedino.com/ or https://elgoog.im/t-rex/
  2. Read game state from Runner.instance_ (obstacles, speed, dino position, crash status)
  3. Decide action based on bot logic (rules or Q-learning)
  4. Execute via keyboard input (SPACE = jump, ARROW_DOWN = duck)
  5. Repeat at high frequency (8-15ms per tick)
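Steps 2–4 can be sketched with Selenium's `execute_script` and `send_keys`. The JS reads the `Runner.instance_` fields documented in this README; the helper names are illustrative, not the repo's actual code.

```python
# One read-decide-act tick over Selenium (illustrative helper names).

STATE_SCRIPT = """
const r = Runner.instance_;
return {
    crashed: r.crashed,
    speed: r.currentSpeed,
    jumping: r.tRex.jumping,
    obstacles: r.horizon.obstacles.map(o => ({
        x: o.xPos, y: o.yPos, type: o.typeConfig.type
    }))
};
"""

def nearest_obstacle(state):
    """Closest obstacle ahead, or None if the horizon is clear."""
    obs = state["obstacles"]
    return min(obs, key=lambda o: o["x"]) if obs else None

def tick(driver, decide):
    """One loop iteration: read state, pick an action, send the key."""
    from selenium.webdriver.common.keys import Keys  # requires selenium installed
    state = driver.execute_script(STATE_SCRIPT)
    action = decide(state)  # e.g. rule logic or a Q-table lookup
    body = driver.find_element("tag name", "body")
    if action == "jump":
        body.send_keys(Keys.SPACE)
    elif action == "duck":
        body.send_keys(Keys.ARROW_DOWN)
```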

### Game State Available

```javascript
Runner.instance_ = {
    crashed: bool,           // collision detected
    currentSpeed: float,     // game speed (increases over time)
    tRex: {
        jumping: bool,       // dino is mid-jump
        ducking: bool,       // dino is ducking
        yPos: float,         // vertical position
        xPos: float          // horizontal position
    },
    horizon: {
        obstacles: [{
            xPos: float,     // distance ahead
            yPos: float,     // height (lower = higher on screen)
            width: float,    // obstacle width
            typeConfig: { type: string }  // CACTUS_SMALL, CACTUS_LARGE, PTERODACTYL
        }]
    }
}
```

## Troubleshooting

### `Runner is not defined`

  • Ensure the game URL loads correctly
  • The bot will automatically retry alternate URLs

### `selenium.common.exceptions.WebDriverException`

  • Ensure ChromeDriver version matches your Chrome browser
  • Add ChromeDriver to your PATH

### Bot crashes frequently

  • Use dino_bot_perfect.py for near-unbeatable play
  • For Q-learning bots, run more episodes — they improve over time

## License

MIT License
