Pokemon LLM

Talk to AI-powered NPCs in Pokemon games using local LLMs.

Press SELECT in-game to speak with mysterious characters who respond dynamically using a local language model. No internet required, no API keys, fully offline.

How It Works

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   mGBA      │      │   bridge.py │      │ llama.cpp   │
│  Emulator   │─────▶│   Python    │─────▶│   Server    │
│  + Lua      │◀─────│   Bridge    │◀─────│   (LLM)     │
└─────────────┘      └─────────────┘      └─────────────┘
   IPC via files           HTTP to localhost:8080

The Lua script captures button presses in mGBA, writes player messages to a file, and the Python bridge sends them to a local llama.cpp server. Responses come back through the same pipeline.
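The loop above can be sketched roughly like this. The file names, prompt format, and generation parameters here are illustrative assumptions, not the actual values in bridge.py; only the `/completion` endpoint shape follows llama.cpp's HTTP server.

```python
import time
from pathlib import Path

import requests

# Assumed exchange files -- the real paths are configured in bridge.py.
INBOX = Path("/tmp/pokemon-llm-in.txt")    # written by the Lua script
OUTBOX = Path("/tmp/pokemon-llm-out.txt")  # read back by the Lua script
LLAMA_URL = "http://localhost:8080/completion"

def build_payload(message: str, system_prompt: str) -> dict:
    """Assemble a llama.cpp /completion request body for one player message."""
    return {
        "prompt": f"{system_prompt}\nPlayer: {message}\nNPC:",
        "n_predict": 96,          # keep replies short enough for a console line
        "temperature": 0.8,
        "stop": ["Player:"],      # stop before the model speaks for the player
    }

def run_bridge(system_prompt: str) -> None:
    """Poll the inbox file, query the LLM, and write the reply to the outbox."""
    while True:
        if INBOX.exists():
            message = INBOX.read_text().strip()
            INBOX.unlink()  # consume the message so it is handled once
            resp = requests.post(LLAMA_URL, json=build_payload(message, system_prompt))
            reply = resp.json().get("content", "").strip()
            OUTBOX.write_text(reply)
        time.sleep(0.2)  # cheap file polling; no sockets on the emulator side
```

File-based IPC keeps the Lua side trivial: mGBA's scripting environment can read and write files without any networking support.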

Characters

Three NPCs with distinct personalities:

  • Professor Hemlock — Cryptic researcher who speaks in riddles about legendary Pokemon and ancient bonds
  • Vera — Pattern-obsessed astrologer who sees connections everywhere, casts horary charts, and mutters about what the Unown are really spelling
  • Iris — Ageless starseed who attracts synchronicities, sees angel numbers, and leaves trainers feeling lighter

Characters are selected based on keywords in your message, or randomly if no keywords match.
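Keyword routing with a random fallback might look something like this. The keyword lists are made up for illustration; the actual triggers live in bridge.py.

```python
import random

# Illustrative keyword lists -- not the ones shipped in the repo.
CHARACTER_KEYWORDS = {
    "hemlock": ["legend", "ancient", "research", "professor"],
    "vera": ["stars", "chart", "unown", "pattern"],
    "iris": ["angel", "light", "synchronicity", "dream"],
}

def pick_character(message: str) -> str:
    """Return the first NPC whose keywords appear in the message, else a random one."""
    lowered = message.lower()
    for name, keywords in CHARACTER_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return name
    return random.choice(list(CHARACTER_KEYWORDS))
```

For example, "tell me about the ancient legends" routes to Professor Hemlock, while a message with no trigger words gets a random NPC.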

Requirements

  • macOS (tested on Apple Silicon)
  • llama.cpp with a GGUF model
  • mGBA emulator with Lua scripting support
  • Python 3 with requests
  • Pokemon ROM (Crystal or Emerald)

Quick Start

1. Install llama.cpp and download a model

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build -j
mkdir models && cd models
# Download TinyLlama (fast, ~637MB) or a larger model for better quality
curl -LO https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf

2. Clone this repo

git clone https://github.com/erosika/pokemon-llm
cd pokemon-llm

3. Set up Python environment

python3 -m venv .venv
.venv/bin/pip install requests

4. Install mGBA

brew install mgba

5. Get a Pokemon ROM

Build from source using pret/pokecrystal or pret/pokeemerald, or use your own legally obtained ROM.

mkdir roms
# Place your .gbc or .gba file in the roms folder

6. Update paths in scripts

Edit run-server.sh, run-bridge.sh, and run-emulator.sh to match your local paths.

Running

Open three terminal windows:

Terminal 1 — LLM Server:

./run-server.sh

Terminal 2 — Python Bridge:

./run-bridge.sh

Terminal 3 — Emulator:

./run-emulator.sh

Press SELECT in-game to talk to the LLM. Responses appear in the mGBA console.
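If nothing comes back after pressing SELECT, a quick sanity check is to hit the llama.cpp server's `/health` endpoint before blaming the Lua script. This assumes a reasonably recent llama.cpp server build (which reports `{"status": "ok"}` once the model is loaded) and the default port 8080; adjust the URL to match run-server.sh.

```python
import requests

def parse_health(body: dict) -> bool:
    """True when llama.cpp's /health reports the model is loaded and ready."""
    return body.get("status") == "ok"

def server_is_ready(base_url: str = "http://localhost:8080") -> bool:
    """Return True if the llama.cpp server answers /health with a ready status."""
    try:
        resp = requests.get(f"{base_url}/health", timeout=2)
    except requests.RequestException:
        return False  # not running, wrong port, or still starting up
    return resp.status_code == 200 and parse_health(resp.json())
```

Run `server_is_ready()` from the project's venv; `False` usually means run-server.sh isn't running or the model is still loading.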

Supported Games

  • Pokemon Crystal (Game Boy Color)
  • Pokemon Emerald (Game Boy Advance) — in progress

Model Recommendations

Model            Size     Speed      Quality
TinyLlama 1.1B   ~637MB   Very fast  Chaotic but fun
Llama 3.2 3B     ~2GB     Fast       Good
Mistral 7B       ~5GB     Medium     Great
Llama 3.1 8B     ~6GB     Slower     Excellent
Smaller models run faster but produce more erratic responses. Larger models need more RAM.

Future Ideas

  • Display responses in actual game text boxes
  • NPCs that appear in the overworld
  • Location-aware dialog
  • Party Pokemon awareness
  • Quest generation based on game state

License

MIT
