smartflame17/ClueChain

Four Pyres, One Pardon (Project ClueChain)

Project ClueChain is an experimental Unity detective game that integrates a locally executed Large Language Model (LLM) as a core gameplay system.
NPCs dynamically generate dialogue based on player questions, collected evidence, and narrative context—without relying on cloud APIs or scripted dialogue trees.

Developed by a small team over a short timeframe, the project explores the feasibility and limitations of on-device LLMs in interactive narrative games.


Project Summary

  • Genre: Dark comedy / satire detective game
  • Engine: Unity 6.2
  • AI Core: Local LLM (llama.cpp)
  • Team Size: 4
  • Duration: ~3 months

Set in a bureaucratic version of hell, the player interrogates five suspects and must identify the single innocent character.
Each playthrough features dynamically generated dialogue, relationships, and narrative context.


Key Technologies & Methods

Local LLM Integration

  • Model: Gemma 3 12B (quantized)
  • Runtime: llama.cpp
  • Fully offline execution (no external APIs or servers)
  • GPU-accelerated local inference

This approach avoids network latency, API costs, and privacy concerns while testing the real-world viability of local LLMs in games.
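As a rough illustration of the offline setup, the sketch below shows one way a Unity game could launch a bundled llama.cpp server as a child process at startup. The flags (`-m`, `--port`, `-ngl`) are real llama.cpp `llama-server` options; the class name and the choice to use the HTTP server (rather than native bindings) are assumptions, not necessarily how this project is wired.

```csharp
// Hypothetical sketch: spawning llama.cpp's llama-server at game start.
// Assumes the server binary ships alongside the game; -ngl offloads
// layers to the GPU for accelerated local inference.
using System.Diagnostics;

public static class LlamaServerLauncher
{
    public static Process Start(string modelPath, int port = 8080, int gpuLayers = 99)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "llama-server",                         // llama.cpp HTTP server binary
            Arguments = $"-m \"{modelPath}\" --port {port} -ngl {gpuLayers}",
            UseShellExecute = false,
            CreateNoWindow = true,
        };
        return Process.Start(psi); // caller must terminate it on application quit
    }
}
```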


Asynchronous Unity ↔ LLM Architecture

  • C# async/await–based communication
  • LLM inference does not block the game loop
  • Centralized LLMManager handles prompt construction, inference, and output parsing
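A minimal sketch of that non-blocking request path, assuming the model is reached over llama-server's local HTTP `/completion` endpoint (the `LLMManager` class here and its method names are illustrative, not the project's actual code):

```csharp
// Illustrative sketch: awaiting local inference without blocking the game loop.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class LLMManager
{
    static readonly HttpClient client = new HttpClient();
    const string Endpoint = "http://127.0.0.1:8080/completion";

    // Awaited from gameplay code; Unity's frame loop keeps running
    // while inference happens in the llama.cpp process.
    public async Task<string> CompleteAsync(string prompt)
    {
        string body = "{\"prompt\": " + Quote(prompt) + ", \"n_predict\": 256}";
        var content = new StringContent(body, Encoding.UTF8, "application/json");
        HttpResponseMessage resp = await client.PostAsync(Endpoint, content);
        resp.EnsureSuccessStatusCode();
        return await resp.Content.ReadAsStringAsync(); // raw JSON; parsed downstream
    }

    static string Quote(string s) =>
        "\"" + s.Replace("\\", "\\\\").Replace("\"", "\\\"").Replace("\n", "\\n") + "\"";
}
```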

Single-Model, Multi-NPC System

  • One LLM instance is shared across all NPCs
  • NPC identity is switched via dynamic system prompts
  • Each NPC maintains an isolated dialogue history

This significantly reduces memory usage while preserving personality consistency.
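The identity-switching idea can be sketched as a per-NPC session that owns its persona and history, while the model itself stays shared. All names below are illustrative assumptions:

```csharp
// Sketch: one shared model, many NPC sessions. Each session rebuilds the
// full prompt per turn from its own system prompt and isolated history.
using System.Collections.Generic;
using System.Text;

public class NpcSession
{
    public string SystemPrompt;                            // persona, guilt/innocence, secrets
    public readonly List<(string role, string text)> History = new();

    public string BuildPrompt(string playerLine)
    {
        var sb = new StringBuilder(SystemPrompt).AppendLine();
        foreach (var (role, text) in History)              // only THIS NPC's turns
            sb.AppendLine($"{role}: {text}");
        sb.AppendLine($"Player: {playerLine}");
        return sb.ToString();
    }
}
```

Because histories never mix, one NPC cannot "remember" what the player told another, and only one model's weights ever reside in memory.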


Hierarchical Prompt Design

NPC behavior is controlled through layered prompts:

  1. Shared interrogation rules
  2. Character-specific data (personality, guilt/innocence, secrets)
  3. Dynamically generated story context

Prompt data is managed using Unity ScriptableObjects for easy iteration and version control.
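A sketch of how the three layers might be stored as a Unity ScriptableObject; the field names and menu path are assumptions for illustration:

```csharp
// Hypothetical character-prompt asset. ScriptableObject assets are plain
// YAML on disk, so they diff cleanly in version control and designers can
// iterate on prompts without touching code.
using UnityEngine;

[CreateAssetMenu(menuName = "ClueChain/Character Prompt")]
public class CharacterPromptData : ScriptableObject
{
    [TextArea] public string sharedRules;   // layer 1: interrogation rules
    [TextArea] public string personality;   // layer 2: character-specific data
    public bool isGuilty;
    [TextArea] public string secrets;

    // Layer 3 (story context) is generated at runtime and appended last.
    public string Compose(string storyContext) =>
        $"{sharedRules}\n{personality}\n{secrets}\n{storyContext}";
}
```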


Dynamic Story Context Generation

  • At game start, the LLM generates a unified narrative connecting characters, events, and relationships
  • Identical base data can produce different narrative structures each run
  • Improves replayability without requiring handcrafted branching stories
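One plausible shape for the one-time generation step is a seed prompt assembled from the fixed base data, whose reply becomes the shared story-context layer. Everything below is an illustrative guess at that prompt, not the project's actual wording:

```csharp
// Hypothetical seed prompt for start-of-game story generation.
public static class StoryGenerator
{
    public static string BuildSeedPrompt(string[] characterBios, string[] fixedEvents)
    {
        return
            "Write a single connected backstory for a detective scenario.\n" +
            "Characters:\n- " + string.Join("\n- ", characterBios) + "\n" +
            "Fixed events that must occur:\n- " + string.Join("\n- ", fixedEvents) + "\n" +
            "Exactly one character is innocent. Output relationships, motives, " +
            "and a timeline consistent with the fixed events.";
    }
}
```

Because the model samples stochastically, the same bios and events can yield different relationship webs each run, which is where the replayability comes from.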

Cooperation (Interrogation Pressure) System

  • NPC responses adapt based on player questioning style and evidence usage
  • A separate evaluation LLM analyzes player input and adjusts cooperation level
  • Prevents dialogue context corruption and improves character consistency

Higher cooperation yields clearer, more truthful answers; lower cooperation results in evasive or misleading responses.
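The mechanism can be sketched as a scalar cooperation level nudged by the evaluation LLM's verdict and mapped to a tone instruction for the dialogue model. Thresholds, deltas, and instruction strings below are illustrative assumptions:

```csharp
// Sketch: cooperation as state outside the dialogue history, so tone shifts
// never corrupt the NPC's conversational context.
public class CooperationTracker
{
    public float Level { get; private set; } = 0.5f;   // 0 = hostile, 1 = fully cooperative

    // delta comes from the separate evaluation LLM's judgment of the player's
    // last question (e.g. negative for aggression, positive for evidence-backed asks).
    public void Apply(float delta) =>
        Level = UnityEngine.Mathf.Clamp01(Level + delta);

    // Injected into the NPC's system prompt each turn.
    public string ToPromptInstruction() => Level switch
    {
        < 0.3f => "Be evasive; deflect and volunteer nothing.",
        < 0.7f => "Answer guardedly; admit only what the evidence forces.",
        _      => "Answer openly and truthfully.",
    };
}
```

Keeping the level in the system prompt rather than the chat history is what lets tone change without the transcript contradicting itself.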


Evidence-Driven Dialogue

  • Collected evidence is injected directly into NPC prompts
  • NPC statements change depending on what evidence is known or presented
  • Includes intentionally misleading “dummy evidence” to force player reasoning
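A sketch of the injection step, assuming evidence is serialized into a prompt section each turn (names below are hypothetical):

```csharp
// Illustrative evidence log: known items are rendered into the NPC prompt
// so a character cannot plausibly deny what the player can prove.
using System.Collections.Generic;
using System.Linq;

public class EvidenceLog
{
    readonly List<(string name, string summary, bool isDummy)> items = new();

    public void Collect(string name, string summary, bool isDummy = false) =>
        items.Add((name, summary, isDummy));

    // Dummy evidence is injected exactly like real evidence; only the authored
    // ground truth distinguishes it, so NPCs react to both naturally and the
    // player must reason about which items are trustworthy.
    public string ToPromptSection() =>
        items.Count == 0
            ? "The detective has presented no evidence."
            : "Evidence the detective holds:\n" +
              string.Join("\n", items.Select(i => $"- {i.name}: {i.summary}"));
}
```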

Key Problems Addressed

  • Local LLM performance limits on consumer hardware
  • Memory constraints when handling multiple AI-driven NPCs
  • Dialogue consistency and personality drift
  • Narrative replayability with limited authored content
  • Small-team production constraints

These challenges were addressed through prompt layering, task-separated LLM roles, modular architecture, and AI-assisted content generation.


Design Philosophy

The LLM is treated as a core game system, not a chatbot.
Ambiguity, contradictions, and partial truths are intentionally embraced to mirror real interrogation dynamics and enhance player deduction.


Disclaimer

This is an experimental academic project, focused on exploration and system design rather than production-scale optimization.

About

Built with Unity 6.2 (6000.2.8f1).
