An AI-powered bot that plays the Chrome Dinosaur Game. Includes a perfect deterministic bot that plays indefinitely, plus reinforcement learning (Q-learning) bots for experimentation. Uses Selenium for browser automation.
| Bot | File | Approach | Learns? | Performance |
|---|---|---|---|---|
| Perfect Bot | dino_bot_perfect.py | Physics-based rules | No (doesn't need to) | Near-unbeatable |
| Basic Bot | dino_bot.py | Adaptive thresholds | Yes (JSON) | Good after tuning |
| Q-Learning Bot | dino_bot_ml.py | Reinforcement learning | Yes (pickle) | Improves over episodes |
| Q-Learning Headless | dino_bot_ml_headless.py | RL + headless Chrome | Yes (pickle) | Fastest training |
The star of the show. Uses precise game-state reading and physics-based calculations to make optimal decisions:
- Speed-adaptive jump timing — calculates exact reaction distance based on current game speed
- Obstacle type awareness — jumps over cacti, ducks under low pterodactyls, runs under high ones
- Multi-obstacle lookahead — checks if a second obstacle is close behind to adjust timing
- Fast-fall mechanic — presses DOWN while airborne to descend faster when needed
- 125 Hz polling — reads game state ~125 times/second for precise reactions
python3 dino_bot_perfect.py            # visible mode
python3 dino_bot_perfect.py --headless # headless mode

For each game tick (~8ms):
1. Read exact obstacle positions, types, dino state from Runner.instance_
2. Nearest obstacle is a cactus?
→ Jump when obstacle is within speed-dependent reaction distance
3. Nearest obstacle is a low pterodactyl (yPos >= 70)?
→ Duck and hold until it passes
4. Nearest obstacle is a high pterodactyl (yPos < 60)?
→ Do nothing — run right under it
5. Two obstacles close together?
→ Adjust jump timing to clear both
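The decision rules above can be sketched as a single pure function. The threshold constants, the linear reaction-distance model, and the 120 px gap test are illustrative assumptions, not the exact values used in dino_bot_perfect.py:

```python
# Sketch of the perfect bot's per-tick decision rules. Constants are
# assumed for illustration; the real bot tunes them to the game physics.

def reaction_distance(speed: float) -> float:
    """Distance (px) at which to react, scaling with game speed (assumed linear)."""
    return 60 + speed * 11

def decide_action(state: dict) -> str:
    """Return 'jump', 'duck', or 'run' for the nearest obstacle."""
    obstacles = state["obstacles"]
    if not obstacles:
        return "run"
    nearest = obstacles[0]
    distance = nearest["xPos"] - state["tRex"]["xPos"]
    if "PTERODACTYL" in nearest["type"]:
        if nearest["yPos"] >= 70:   # low bird: duck under it
            return "duck" if distance < reaction_distance(state["speed"]) else "run"
        if nearest["yPos"] < 60:    # high bird: run straight under
            return "run"
    # cactus (or mid-height bird): jump inside the reaction window
    if distance < reaction_distance(state["speed"]):
        # second obstacle close behind? jump slightly later to clear both
        if len(obstacles) > 1 and obstacles[1]["xPos"] - nearest["xPos"] < 120:
            return "jump" if distance < reaction_distance(state["speed"]) * 0.9 else "run"
        return "jump"
    return "run"
```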
Adaptive threshold-based bot that learns from crashes:
- Maintains separate jump and duck thresholds per speed range
- Identifies obstacle types (cactus vs pterodactyl) and reacts accordingly
- Adjusts thresholds based on crash feedback (jumped too early/late)
- Saves progress to learning_data.json
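The crash-feedback loop can be sketched as follows, assuming the bot records whether it jumped before or after the crash point. The function names, bucket keys, and step size are hypothetical, not taken from dino_bot.py:

```python
# Sketch of crash-feedback threshold tuning. One jump threshold is kept
# per speed bucket and nudged after each crash.
import json

def adjust_threshold(thresholds: dict, speed_bucket: str, crash_cause: str,
                     step: float = 5.0) -> dict:
    """Nudge the jump threshold for one speed range after a crash."""
    t = thresholds.setdefault(speed_bucket, 100.0)
    if crash_cause == "jumped_late":     # hit the obstacle: react earlier
        thresholds[speed_bucket] = t + step
    elif crash_cause == "jumped_early":  # came down onto it: react later
        thresholds[speed_bucket] = t - step
    return thresholds

def save_thresholds(thresholds: dict, path: str = "learning_data.json") -> None:
    """Persist learned thresholds between runs."""
    with open(path, "w") as f:
        json.dump(thresholds, f, indent=2)
```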
python3 dino_bot.py

Reinforcement learning bots with improved state representation:
- 5 speed categories (finer than original 3) for better discrimination
- Dino action state (running/jumping/ducking) prevents invalid actions
- Obstacle height categories encode the correct response (jump/duck/ignore)
- Action masking — can't jump while already jumping
- Streak bonuses — extra reward for consecutive obstacle passes
- Saves Q-table to learning_data.pkl for continuous improvement
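A minimal sketch of the state encoding and action masking described above; the bucket boundaries and category names are illustrative assumptions, not the exact ones in dino_bot_ml.py:

```python
# Sketch: discretize continuous game values into a small state tuple,
# and mask out actions that are invalid in the current dino state.

def encode_state(speed: float, distance: float, obstacle_y: float,
                 dino_state: str) -> tuple:
    """Map continuous readings to a coarse, hashable Q-table key."""
    speed_bin = min(int(speed // 3), 4)   # 5 speed categories
    dist_bin = min(int(distance // 60), 5)
    if obstacle_y >= 70:
        height = "low"    # low bird: correct response is duck
    elif obstacle_y < 60:
        height = "high"   # high bird: correct response is ignore
    else:
        height = "mid"    # cactus-height: correct response is jump
    return (speed_bin, dist_bin, height, dino_state)

def valid_actions(dino_state: str) -> list:
    """Action masking: can't start a new jump while already airborne."""
    if dino_state == "jumping":
        return ["noop"]
    return ["noop", "jump", "duck"]
```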
python3 dino_bot_ml.py          # visible mode (50 episodes)
python3 dino_bot_ml_headless.py # headless mode (faster training)

| Parameter | Value | Purpose |
|---|---|---|
| Alpha | 0.15 | Learning rate |
| Gamma | 0.95 | Discount factor |
| Epsilon start | 1.0 | Initial exploration |
| Epsilon decay | 0.995 | Per-episode decay |
| Epsilon min | 0.02 | Minimum exploration |
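A minimal Q-learning loop using the hyperparameters from the table. The state and action encodings here are placeholders; the bots use the richer representation described above:

```python
# One-step Q-learning with epsilon-greedy exploration, wired up with
# the hyperparameters from the table above.
import random
from collections import defaultdict

ALPHA, GAMMA = 0.15, 0.95
EPSILON_START, EPSILON_DECAY, EPSILON_MIN = 1.0, 0.995, 0.02

ACTIONS = ["noop", "jump", "duck"]
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state, epsilon: float) -> str:
    """Epsilon-greedy policy: explore with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action: str, reward: float, next_state) -> None:
    """Standard temporal-difference Q-learning update."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (
        reward + GAMMA * best_next - q_table[state][action]
    )

def decay_epsilon(epsilon: float) -> float:
    """Per-episode decay, floored at the minimum exploration rate."""
    return max(EPSILON_MIN, epsilon * EPSILON_DECAY)
```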
- Python 3.8+
- Google Chrome (latest version)
- ChromeDriver matching your Chrome version
- Download from ChromeDriver Downloads
- Or install via package manager (e.g., brew install chromedriver)
git clone https://github.com/yourusername/chrome-dino-ai.git
cd chrome-dino-ai
pip install -r requirements.txt

All bots interact with the Chrome Dino game through Selenium WebDriver and JavaScript injection:
- Open the game at https://chromedino.com/ or https://elgoog.im/t-rex/
- Read game state from Runner.instance_ (obstacles, speed, dino position, crash status)
- Decide an action based on bot logic (rules or Q-learning)
- Execute it via keyboard input (SPACE = jump, ARROW_DOWN = duck)
- Repeat at high frequency (8-15 ms per tick)
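The read step can be done in one JavaScript round trip per tick, assuming a Selenium `driver` already has the game open. The JS snippet mirrors the Runner.instance_ fields shown below; `read_state` and `nearest_obstacle` are hypothetical helper names:

```python
# Sketch: snapshot the game state via JavaScript injection, then pick
# the nearest obstacle ahead of the dino on the Python side.
GAME_STATE_JS = """
const r = Runner.instance_;
return {
  crashed: r.crashed,
  speed: r.currentSpeed,
  tRex: {jumping: r.tRex.jumping, ducking: r.tRex.ducking,
         xPos: r.tRex.xPos, yPos: r.tRex.yPos},
  obstacles: r.horizon.obstacles.map(o => ({
    xPos: o.xPos, yPos: o.yPos, width: o.width,
    type: o.typeConfig.type
  }))
};
"""

def read_state(driver) -> dict:
    """Fetch a plain-dict snapshot of the game state in one round trip."""
    return driver.execute_script(GAME_STATE_JS)

def nearest_obstacle(state: dict):
    """Return the obstacle closest ahead of the dino, or None."""
    ahead = [o for o in state["obstacles"] if o["xPos"] > state["tRex"]["xPos"]]
    return min(ahead, key=lambda o: o["xPos"]) if ahead else None
```

Returning one plain object per tick keeps the Selenium overhead to a single `execute_script` call, which matters at 8-15 ms tick rates.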
Runner.instance_ = {
crashed: bool, // collision detected
currentSpeed: float, // game speed (increases over time)
tRex: {
jumping: bool, // dino is mid-jump
ducking: bool, // dino is ducking
yPos: float, // vertical position
xPos: float // horizontal position
},
horizon: {
obstacles: [{
xPos: float, // distance ahead
yPos: float, // height (lower = higher on screen)
width: float, // obstacle width
typeConfig: { type: string } // CACTUS_SMALL, CACTUS_LARGE, PTERODACTYL
}]
}
}

- Ensure the game URL loads correctly
- The bot will automatically retry alternate URLs
- Ensure ChromeDriver version matches your Chrome browser
- Add ChromeDriver to your PATH
- Use dino_bot_perfect.py for near-unbeatable play
- For Q-learning bots, run more episodes — they improve over time
MIT License