# Ralphy Looper

CI · Python 3.11+ · License: MIT · Typed · Coverage

Run AI coding assistants in automated loops with crash recovery, health monitoring, and multi-phase orchestration.

> "I choo-choo-choose you!" — Ralph Wiggum

Documentation | Architecture | Conventions | Roadmap | Contributing

Ralph solves the problem of manually babysitting AI coding sessions. Instead of watching Claude, Gemini, or Codex work and restarting them when they stall, Ralph runs them in a loop — detecting completion signals, handling errors with exponential backoff, and persisting state so you can resume after interruptions.
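The loop behavior described above can be sketched as follows. This is an illustrative model only, not Ralph's actual internals: the function name, defaults, and error type are hypothetical, though the defaults mirror the configuration example later in this README.

```python
import time

def run_with_backoff(run_iteration, max_iterations=30,
                     completion_signal="LOOP_COMPLETE",
                     base_delay=1.0, max_errors=5):
    """Run iterations until the completion signal appears, retrying
    transient failures with exponential backoff (sketch only)."""
    consecutive_errors = 0
    for i in range(1, max_iterations + 1):
        try:
            output = run_iteration(i)
        except RuntimeError:
            consecutive_errors += 1
            if consecutive_errors >= max_errors:
                raise  # too many consecutive errors: give up
            # exponential backoff: base, 2x base, 4x base, ...
            time.sleep(base_delay * 2 ** (consecutive_errors - 1))
            continue
        consecutive_errors = 0  # any success resets the error streak
        if completion_signal in output:
            return i  # signal detected: loop done
    return None  # max iterations reached without a signal
```

Resetting the error counter on every successful iteration is what distinguishes "a few transient stalls" from "consistently broken", which is why the real orchestrator can run unattended for long sessions.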

## Who is this for?

- Developers using the Ralph Wiggum technique for AI-assisted coding
- Teams supervising long-running AI coding sessions
- Anyone who wants to automate multi-phase AI workflows with different prompts

## Quick example

```shell
# Scaffold a project
ralph init my-project && cd my-project

# Edit your prompt, then run
ralph run ralph.yaml
```

```text
============================================================
  Ralph Loop - Main Phase
============================================================

Backend: claude
Prompt: prompts/phase-01-main.md
Max iterations: 30

>>> Iteration 1 [phase-01]
>>> Iteration 2 [phase-01]
Completed: phase-01

Loop completed successfully
```

## Installation

```shell
git clone https://github.com/joserprieto/ralphy-looper.git
cd ralphy-looper
poetry install
```

## Quick Start

```shell
# Initialize new project
ralph init my-project

# Validate configuration
ralph run ralph.yaml --dry-run

# Run orchestrator
ralph run ralph.yaml

# Check state
ralph status ralph.yaml

# Show version
ralph version
```

## Configuration

```yaml
version: "1.0.0"

project:
  name: "project-name"
  description: "description"

orchestrator:
  max_iterations: 30
  completion_signal: "LOOP_COMPLETE"
  phase_completion_signal: "PHASE_COMPLETE"

backend:
  name: "claude"  # claude | gemini | codex

phases:
  - id: "phase-01"
    name: "Phase Name"
    prompt: "prompts/phase-01.md"
    max_iterations: 10
```
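A minimal validation sketch for this schema, roughly what `ralph run --dry-run` checks conceptually. The function and its exact rules are hypothetical; the key names come from the example above, and the real loader lives behind `ConfigPort`.

```python
def validate_config(cfg: dict) -> list[str]:
    """Return a list of schema problems (empty list means valid).
    Sketch only: the real validator enforces more than this."""
    errors = []
    # Top-level sections from the example config above.
    for key in ("version", "project", "orchestrator", "backend", "phases"):
        if key not in cfg:
            errors.append(f"missing top-level key: {key}")
    # Each phase needs at least an id and a prompt file.
    for i, phase in enumerate(cfg.get("phases", [])):
        for key in ("id", "prompt"):
            if key not in phase:
                errors.append(f"phases[{i}] missing '{key}'")
    return errors
```

Returning a list of problems rather than raising on the first one lets a dry run report every issue in one pass.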

## Backends

| Backend | Command | Default Flags | Prompt Mode |
|---------|---------|---------------|-------------|
| claude | `claude` | `-p --dangerously-skip-permissions` | stdin |
| gemini | `gemini` | `--yolo` | stdin |
| codex | `codex` | `--full-auto` | file |
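The two prompt modes in the table differ in how the prompt reaches the backend process: stdin-mode backends read it from standard input, while file-mode backends take a path argument. A sketch of that distinction, with a hypothetical helper (the real adapters live in `ralph/adapters/backends/`, and the flags are the defaults from the table):

```python
import tempfile

# Mirrors the backends table above: (argv, prompt mode).
BACKENDS = {
    "claude": (["claude", "-p", "--dangerously-skip-permissions"], "stdin"),
    "gemini": (["gemini", "--yolo"], "stdin"),
    "codex":  (["codex", "--full-auto"], "file"),
}

def build_invocation(backend: str, prompt: str):
    """Return (argv, stdin_text). For file-mode backends the prompt is
    written to a temp file and appended to argv; stdin_text is None."""
    cmd, mode = BACKENDS[backend]
    if mode == "stdin":
        return list(cmd), prompt
    tmp = tempfile.NamedTemporaryFile("w", suffix=".md", delete=False)
    tmp.write(prompt)
    tmp.close()
    return list(cmd) + [tmp.name], None
```

Either tuple can then be handed to something like `subprocess.run(argv, input=stdin_text, text=True)`.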

## Testing Status

Ralph has been primarily tested with Claude Code (Anthropic) as both executor and verification agent. Testing with other backends:

| Backend | As Executor | As Verifier | Notes |
|---------|-------------|-------------|-------|
| Claude | Tested | Tested | Primary backend. Recommended. |
| Gemini | Tested | Tested | Works well as executor and verifier. |
| Codex | Limited | Not working | Produces ~32 chars of output as verifier; never emits verification markers. |

**Recommendation:** Use Claude Code as executor. For verification, use Claude + Gemini with majority consensus. See ROADMAP.md for Codex status.

## Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success (completion signal detected) |
| 1 | Max iterations/runtime reached |
| 2 | Configuration error |
| 3 | Pre-flight check failed |
| 4 | Too many consecutive errors |
| 5 | Health check abort |
| 130 | Interrupted (SIGINT) |
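When wrapping `ralph run` in CI or a supervisor script, the table maps naturally to a small lookup. The retry rule below is a suggested policy for callers, not part of Ralph itself:

```python
# Exit codes from the table above.
EXIT_MEANINGS = {
    0: "success (completion signal detected)",
    1: "max iterations/runtime reached",
    2: "configuration error",
    3: "pre-flight check failed",
    4: "too many consecutive errors",
    5: "health check abort",
    130: "interrupted (SIGINT)",
}

def should_retry(code: int) -> bool:
    """Suggested policy: retry only when the loop ran out of budget.
    Config errors, failed pre-flight checks, repeated errors, and
    interrupts are not worth re-running unchanged."""
    return code == 1
```

For example, a wrapper could call `subprocess.run(["ralph", "run", "ralph.yaml"])` and re-invoke only while `should_retry(result.returncode)` holds.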

## Architecture

Ralph follows Hexagonal Architecture (Ports & Adapters):

```text
ralph/
├── ports/              # Contracts (interfaces)
│   ├── backend.py      # BackendPort protocol
│   ├── config.py       # ConfigPort + types
│   ├── state.py        # StatePort protocol
│   ├── logging.py      # IterationLogPort protocol
│   ├── status.py       # HealthStatus, LoopStatus
│   └── errors.py       # Domain errors
├── adapters/           # Implementations
│   ├── backends/       # Claude, Gemini, Codex
│   ├── config/         # YAML loader (ACL)
│   ├── state/          # File state manager
│   ├── logging/        # File iteration logger
│   └── cli/            # Typer application
└── app/                # Application layer
    ├── orchestrator.py # Async orchestrator
    └── _mixin.py       # Shared orchestrator logic
```

## Documentation

| Document | Description |
|----------|-------------|
| `docs/usage.md` | Complete user guide |
| `docs/architecture/` | Architecture Decision Records (ADRs) |
| `docs/conventions/` | Development conventions |
| `ROADMAP.md` | Planned features and known issues |
| `CONTRIBUTING.md` | Contribution guidelines |

## Development

```shell
make install          # Install dependencies
make qa               # Run all quality checks (lint + typecheck + test)
make qa/pre-push      # Full CI pipeline locally
make test             # Run all tests
make test/unit        # Unit tests only
make test/contract    # Contract tests
make test/smoke       # Smoke tests
make test/cov         # Tests with coverage
make lint             # Run ruff linter
make typecheck        # Run mypy (strict)
make format           # Format code
```

## Contributing

Contributions welcome! Please read CONTRIBUTING.md before submitting PRs.

## License

MIT License - see LICENSE for details.

## Acknowledgments

- Geoffrey Huntley — Creator of the Ralph Wiggum technique that inspired this project
- Typer — CLI framework
- Rich — Terminal formatting
- Poetry — Dependency management and packaging
- PyYAML — YAML configuration parsing

Built with the assistance of Claude (Anthropic), Gemini (Google), and Codex (OpenAI).

Claude, Gemini, and Codex are trademarks of their respective owners. This project is not affiliated with or endorsed by Anthropic, Google, or OpenAI.