"The True is the whole." — G.W.F. Hegel
Hegelion applies dialectical reasoning to LLMs: forcing models to argue with themselves before reaching conclusions. This produces better reasoning for questions and better code for implementations.
Hegelion is prompt-driven by default: it generates structured prompts for your editor to execute, no API keys required. Optional server-side backends enable direct execution and independent coach verification.
Use it via MCP in Claude Desktop, Cursor, VS Code, or any MCP-enabled editor, or via the Python API in your own agents.
| Mode | Pattern | Use Case |
|---|---|---|
| Dialectical Reasoning | Thesis → Antithesis → Synthesis | Deep analysis of questions, philosophy, strategy |
| Autocoding | Player → Coach → Iterate | Verified code implementations with independent review |
Both modes use the same principle: force the model to oppose itself before concluding. This catches blind spots that single-pass approaches miss.
Updated in v0.5.0 — Based on Block AI's g3 agent research.
Single-agent coding tools often:
- Declare success prematurely ("I have successfully implemented all requirements!")
- Accumulate context pollution over long sessions
- Miss edge cases because they verify their own work
Two roles iterate until requirements are verified:
```
        REQUIREMENTS (Source of Truth)
                │
                ▼
┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│    PLAYER     │────▶│     COACH     │────▶│    ADVANCE    │
│  Implements   │     │   Verifies    │     │     State     │
│ code & tests  │     │ independently │     │               │
└───────────────┘     └───────────────┘     └───────┬───────┘
        ▲                                           │
        │             ┌───────────┐                 │
        └─────────────│ APPROVED? │◀────────────────┘
                      └───────────┘
                        │       │
                       No      Yes
                        │       │
                        ▼       ▼
                    Continue   Done
```
Player: Implements requirements, writes tests, responds to feedback. Does NOT declare success.
Coach: Independently verifies each requirement, ignores player's self-assessment, outputs structured checklist. When Codex MCP is available, Hegelion can launch a separate Codex session to act as the coach while the current session remains the player.
"Discard the player's self-report of success. Have the coach perform independent evaluation."
The coach catches issues by re-reading requirements and actually running tests—not by trusting what the player says it did.
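That separation can be sketched in a few lines of plain Python (a toy model: `coach_review` and the checks are hypothetical names, not Hegelion's API):

```python
def coach_review(requirements, checks, player_report="All requirements implemented!"):
    """Independently verify each requirement.

    Note: player_report is deliberately ignored; only the coach's own
    checks (e.g. re-running the tests) decide approval.
    """
    checklist = {req: bool(check()) for req, check in zip(requirements, checks)}
    return {"approved": all(checklist.values()), "checklist": checklist}

# The player claims success, but one check actually fails.
requirements = ["auth endpoint exists", "tests pass"]
checks = [lambda: True, lambda: False]

result = coach_review(requirements, checks)
print(result["approved"])   # False: the failing check blocks approval
print(result["checklist"])
```

The point of the sketch is that approval depends only on re-executed checks, never on the player's narrative.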
In Claude Code, Cursor, or any MCP-enabled editor:
Tip: if your editor exposes slash commands, you can use /hegelion as a wrapper.
The underlying MCP tools are autocode and autocode_turn.
```
You: Call autocode with mode=init and these requirements:
     - Add user authentication to src/api.py
     - Add tests in tests/test_auth.py
     - All tests must pass
[Session initializes]

You: Call autocode_turn with role=player and implement
[Player writes code and tests]

You: Call autocode_turn with role=coach, execute=true, backend=auto, cwd=<workspace>
[Separate Codex coach verifies the workspace when available]

You: Call autocode_turn with role=advance, coach_feedback, approved=false
[Loop until COACH APPROVED]
```
State flow: Each `autocode_turn` returns an updated state for the next call — pass it into the next tool invocation. All outputs include `schema_version` for client stability. See MCP Integration for Codex coach setup.
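As a toy model, the loop looks like this (the stub stands in for the real `autocode_turn` tool, and every field other than `schema_version` and `phase` is an illustrative assumption):

```python
def autocode_turn_stub(state, role):
    """Illustrative stand-in for autocode_turn: each call returns an
    updated state that must be passed into the next call."""
    state = dict(state, schema_version=2)
    if role == "player":
        state["phase"] = "awaiting_coach"
    elif role == "coach":
        # A real coach would re-run tests; here it approves on iteration 2.
        state["phase"] = "awaiting_advance"
        state["coach_feedback"] = {"approved": state["iteration"] >= 2}
    elif role == "advance":
        approved = state["coach_feedback"]["approved"]
        state["phase"] = "done" if approved else "player"
        state["iteration"] += 1
    return state

state = {"phase": "player", "iteration": 0}
while state["phase"] != "done":
    state = autocode_turn_stub(state, "player")
    state = autocode_turn_stub(state, "coach")
    state = autocode_turn_stub(state, "advance")
print(state["iteration"])
```

The shape to notice is that the state is threaded through every call rather than kept in the model's context, which is what keeps long sessions from accumulating pollution.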
| Tool | Purpose |
|---|---|
| `dialectic` | Unified dialectical reasoning (`mode`: `single_shot`, `workflow`, `thesis`, `antithesis`, `synthesis`) |
| `autocode` | Unified autocoding entrypoint (`mode`: `init`, `workflow`, `single_shot`) |
| `autocode_turn` | Execute one autocoding turn (`role`: `player`, `coach`, `advance`) |
| `autocode_session` | Persist or restore sessions (`action`: `save`, `load`) |
This repo includes a Codex skill at `skills/hegelion/SKILL.md`. Install it with your skill installer (for example, `install-skill-from-github.py --repo Hmbown/Hegelion --path skills/hegelion`).
It mirrors the `/hegelion` command, treats the current Codex session as the player, and routes coach turns to a separate Codex backend when available.
| Problem | Single Agent | Coach-Player |
|---|---|---|
| Anchoring | Drifts from requirements | Requirements anchor every turn |
| Verification | Self-assessment (unreliable) | Independent verification |
| Context | Accumulates pollution | Fresh context each turn |
| Completion | Open-ended | Explicit approval gates |
For questions requiring deep analysis, Hegelion forces three separate LLM calls:

```
[Call 1] Thesis     → LLM commits to a position
[Call 2] Antithesis → LLM attacks that position (separate call, no hedging)
[Call 3] Synthesis  → LLM reconciles the opposition
```
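With a stub standing in for the model, the three-call shape can be sketched as follows (the prompt wording is illustrative, not Hegelion's actual templates):

```python
def dialectic(question, llm):
    """Three separate calls: each phase gets its own prompt, so the
    antithesis genuinely attacks the thesis instead of hedging."""
    thesis = llm(f"Commit to a clear position: {question}")
    antithesis = llm(f"Attack this position as hard as you can: {thesis}")
    synthesis = llm(
        f"Reconcile the opposition.\nThesis: {thesis}\nAntithesis: {antithesis}"
    )
    return {"thesis": thesis, "antithesis": antithesis, "synthesis": synthesis}

# Stub LLM that just echoes the first line of its prompt.
fake_llm = lambda prompt: prompt.splitlines()[0]
result = dialectic("Is free will compatible with determinism?", fake_llm)
print(result["synthesis"])
```

Swap `fake_llm` for a real model call and the structure is the same: three invocations, each seeing only the output it needs from the previous phase.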
| Method | Calls | Result |
|---|---|---|
| Raw | 1 | "It depends on definitions..." |
| Enhanced | 1 | "Hold both views in tension..." |
| Hegelion | 3 | Novel framework with testable predictions |
When the model must commit to a thesis, then genuinely attack it in a separate call, the synthesis surfaces insights that single-call approaches shortcut.
Example: "Is free will compatible with determinism?"
Hegelion synthesis (after thesis and antithesis):
The deadlock dissolves when we recognize free will exists on a spectrum of self-authorship:
- Minimal freedom: Acting on desires without external coercion
- Reflective freedom: Second-order endorsement—I want to want this
- Narrative freedom: Acting consistently with a coherent life narrative
- Constitutive freedom: Recursive self-modification through deliberate habituation
Research proposal: Use fMRI to scan participants under (1) snap judgments, (2) brief reflection, (3) extended deliberation. Hypothesis: Condition (3) shows strongest correlation with self-reported decision "ownership."
This 4-level framework emerged from actually arguing with itself—not from asking for "thesis/antithesis/synthesis" in one prompt.
```
pip install hegelion
```

```
# MCP setup (auto-detects OS)
hegelion-setup-mcp --host claude-desktop

# MCP setup for Claude Desktop (macOS)
hegelion-setup-mcp --write "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```

Or use the prompt-driven Python API (you run the prompt with your LLM of choice):
```python
from hegelion.core.prompt_dialectic import create_single_shot_dialectic_prompt

prompt = create_single_shot_dialectic_prompt(
    "Is AI conscious?",
    use_council=True,
    response_style="sections",
)
print(prompt)
```

Health check (lists tools + generates a sample prompt):
```
hegelion-server --self-test
```

| Option | Description |
|---|---|
| `use_council` | Three critics: Logician, Empiricist, Ethicist |
| `use_search` | Grounds arguments with web search |
| `response_style` | `sections`, `json`, or `synthesis_only` |
```
pip install hegelion
```

For MCP integration (works with any MCP-enabled editor):
```
# Shortcuts
hegelion-setup-mcp --host claude-desktop
hegelion-setup-mcp --host cursor
hegelion-setup-mcp --host vscode
hegelion-setup-mcp --host windsurf

# Claude Desktop (macOS)
hegelion-setup-mcp --write "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

# Claude Desktop (Windows)
hegelion-setup-mcp --write "%APPDATA%\Claude\claude_desktop_config.json"

# Claude Desktop (Linux)
hegelion-setup-mcp --write "$HOME/.config/Claude/claude_desktop_config.json"

# Then restart your MCP host
```

Claude Code (no MCP setup needed):
```
# Use /hegelion command directly in any Hegelion repo clone

# Or add MCP server for tool access
claude mcp add hegelion python -- -m hegelion.mcp.server
```

Manual config (any MCP host):
```json
{
  "mcpServers": {
    "hegelion": {
      "command": "python",
      "args": ["-m", "hegelion.mcp.server"]
    }
  }
}
```

If you run from source (not site-packages), set `PYTHONPATH` to the repo root. The `hegelion-setup-mcp` command writes this automatically.

If your host expects a full command path, use `python -m hegelion.mcp.server` instead of `hegelion-server`.
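If you'd rather script the manual config, here is a minimal standard-library sketch (the merge logic and file handling are assumptions; `hegelion-setup-mcp` does this for you):

```python
import json
import os
import tempfile
from pathlib import Path

def add_hegelion_server(config_path):
    """Merge a hegelion entry into an MCP host config, keeping other servers."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["hegelion"] = {
        "command": "python",
        "args": ["-m", "hegelion.mcp.server"],
    }
    path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a throwaway file rather than a real host config.
with tempfile.TemporaryDirectory() as d:
    cfg = add_hegelion_server(os.path.join(d, "claude_desktop_config.json"))
    print(json.dumps(cfg, indent=2))
```

Point it at your host's real config path only after backing the file up; the merge preserves any other `mcpServers` entries already present.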
Supported editors: Claude Desktop, Claude Code, Cursor, VS Code + GitHub Copilot, Windsurf, Google Antigravity, Gemini CLI
See MCP Integration Guide for setup instructions.
- MCP Integration — Setup for Claude Desktop, Cursor, VS Code + Copilot, Windsurf, Antigravity, Gemini CLI
- Python API — Prompt-driven API reference
- CLI Reference — MCP server and setup commands
- Configuration — Backends and feature toggles
- Technical Specification — Output schemas, phase specs
Issues and PRs welcome. For significant changes, open a discussion first.
- Execution backends: Added `prompt`, `cli`, `codex_mcp`, and `auto` backend selection for MCP execution flows
- Independent Codex coach: `autocode_turn(role=coach, execute=true)` can now run through a separate Codex MCP session and return `coach_feedback`
- Prompt fallback: `backend=auto` now degrades cleanly to prompt-only output when no executable backend is available
- LangGraph removal: Removed the LangGraph package, dependency extra, tests, and docs references
- MCP tool consolidation: 14 tools simplified to 4 unified tools: `dialectic`, `autocode`, `autocode_turn`, `autocode_session`
- State machine simplification: `AutocodingStatus` removed; `phase` is now the only state indicator
- Schema migration: `schema_version` bumped to 2 with backward-compatible v1-to-v2 loading
- Validation cleanup: Added `@validated` decorator + typed specs to remove repetitive handler boilerplate
- Dead code removal: Removed unused judge path, unused conversation state, and deprecated response styles
- Python 3.13 support: Added to CI and classifiers
- Public API exports: `hegelion` now exports key classes directly (`DialecticalPrompt`, `PromptDrivenDialectic`, `AutocodingState`, etc.)
- PEP 561 `py.typed` marker: Enables type-checker support for library consumers
- `--version` flag: Both `hegelion-server` and `hegelion-setup-mcp` now support `--version`
Note: v0.4.x entries below reference pre-v0.5 tool names (e.g. `dialectical_single_shot`). These were replaced by the unified `dialectic` tool in v0.5.0.
- CI fix: Fixed the CI pipeline and automated release workflow
- Single-call CLI execution for `dialectical_single_shot`: `execute`, `timeout_seconds`, `max_retries`
- `hegelion-setup-mcp` flags: `--llm-command-json`, `--llm-command`, `--auto-execute`
- Simplified skill/command: Condensed to minimal routing tables, MCP-first approach
- MCP refactor: Split tooling, handlers, and validation to make the server easier to extend and maintain
- Codex skill: Added `skills/hegelion/SKILL.md` for the `/hegelion` workflow
License: MIT