Independent AI agents review your work in parallel.
None of them see what the others wrote.
Your Claude tells you what they agree on — and where they don't.
Your Claude sets it up. Your Claude runs it. FrontierBoard is just the instructions it reads.
30-Second Install · See It Work · How It Works · The Skills
This project was reviewed by its own board — three agents found 10 critical issues across credential handling and review process integrity. All fixed. See the review.
One model reviewing your code catches some things.
The same model reviewing it three times catches similar things three times.
Three different models — each with a different thinking style, running independently, unable to see each other's work — catch different things.
Disagreements are signal, not noise.
You pick which models sit on your board. You pick how many.
You bring the question. The board brings the perspectives.
Real output from a board review of FrontierBoard's own architecture:
```mermaid
graph LR
    subgraph R1["Round 1 - Blind Review"]
        S["Skeptic<br/>(Claude Opus)"]
        P["Pragmatist<br/>(Claude Opus)"]
        T["Systems Thinker<br/>(Codex)"]
    end
    subgraph R2["Round 2 - Consolidated Findings"]
        C1["C1: Add ripgrep to image<br/>3/3 agree - FIX NOW"]
        C4["C4: Auth cleanup on crash<br/>3/3 agree - FIX NOW"]
        C6["C6: Token expires mid-review<br/>2/3 agree - FIX NOW"]
        C8["C8: Proxy idle timeout<br/>3/3 agree - FIX NOW"]
        D1["D1: Proxy health detection<br/>DEFER - trigger: next proxy change"]
        I1["I1: Exit code propagation<br/>INFO - no action needed"]
    end
    S --> C1
    S --> C6
    P --> C4
    P --> C8
    T --> D1
    T --> I1
    subgraph R4["Round 4 - Sign Off"]
        V["2 SIGN OFF + 1 BLOCK<br/>resolved - owner override"]
    end
    C1 --> V
    C4 --> V
    C6 --> V
    C8 --> V
```
10 findings. 6 FIX NOW. 2 DEFER. 2 INFO. All implemented, smoke-tested, and shipped.
Each finding has a severity, agent consensus, and a concrete fix. Agents that disagree get deliberation rounds. The whole process follows a 4-round SOP.
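The gate between consolidation and sign-off is simple enough to state in a few lines of shell. This is a sketch of the rule as described here; the function name and labels are illustrative, not FrontierBoard identifiers:

```shell
#!/bin/sh
# Route a consolidated finding: unanimous findings skip straight to the
# final sign-off round; anything disputed gets a deliberation round first.
# Function name and output labels are illustrative assumptions.
route() {  # usage: route <agents_agreeing> <total_agents>
  if [ "$1" -eq "$2" ]; then
    echo "round-4 sign-off"
  else
    echo "round-3 deliberation"
  fi
}

route 3 3   # unanimous  -> round-4 sign-off
route 2 3   # disputed   -> round-3 deliberation
```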
Just describe what you want reviewed — Claude handles the rest:
```text
You:    "Review the auth handling changes"
Claude: [writes brief, runs 3 agents in parallel]
Claude: "3 agents found 10 issues. 6 are FIX NOW (all agree),
         2 are DEFER (with triggers), 2 are INFO."
You:    "Fix the FIX NOW items"
Claude: [implements fixes, sends diff back to the board for code review]
```
No special commands needed. Plain language always works. Slash commands are shortcuts.
You need Claude Code. That's it.
Open Claude Code and say:
```text
Set up FrontierBoard: https://github.com/stefans71/FrontierBoard/blob/main/docs/INSTALL.md
```
Claude reads the install instructions and walks you through everything:
| Option | What happens |
|---|---|
| New project | Creates the folder, interviews you, sets up a filing cabinet + review board |
| Existing project | Reads your project, builds the board as a neighbor directory |
| Review a GitHub repo | No project needed — static analysis, safety verdict, or full build review |
| Global install | Install once at ~/.frontierboard/, review any project from anywhere |
You never clone anything yourself. Claude handles it.
```mermaid
graph TD
    A["You + Claude"] -->|describe what to review| B["Claude writes a brief"]
    B --> C["Brief goes to each agent inbox"]
    C --> D["Skeptic<br/>blind review"]
    C --> E["Pragmatist<br/>blind review"]
    C --> F["Systems Thinker<br/>blind review"]
    D --> G["Round 2: Consolidation<br/>Group findings - classify severity"]
    E --> G
    F --> G
    G --> H{"Unanimous?"}
    H -->|Yes| I["Round 4: Confirmation<br/>All agents sign off"]
    H -->|No| J["Round 3: Deliberation<br/>Disputed items only"]
    J --> I
    I --> K["FIX NOW - DEFER - INFO"]
```
Each agent runs in its own directory under a dedicated board user. Blind review is enforced by agent instructions — each agent is told not to read sibling directories before writing its own report. See Security Posture for what this does and doesn't guarantee.
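The Round 1 fan-out can be sketched as a handful of shell steps: one brief copied into per-agent inboxes, each agent run from its own directory, in parallel. The directory layout and the `echo` placeholders (standing in for the real agent CLI invocations) are illustrative assumptions, not FrontierBoard's actual commands:

```shell
#!/bin/sh
# Sketch of the Round 1 fan-out. Paths and placeholder commands are
# illustrative; in a real board each subshell would invoke an agent CLI.
set -eu
BOARD=board
echo "Review the auth handling changes" > brief.md

# Same brief, separate inboxes
for agent in skeptic pragmatist systems-thinker; do
  mkdir -p "$BOARD/$agent"
  cp brief.md "$BOARD/$agent/brief.md"
done

# Each agent works only inside its own directory and never reads a sibling
for agent in skeptic pragmatist systems-thinker; do
  ( cd "$BOARD/$agent" && echo "findings from $agent" > report.md ) &
done
wait

ls "$BOARD"/*/report.md
```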
Not just code. Point the board at architecture decisions, business plans, hiring briefs, financial models, legal documents. The agents have stable thinking styles that apply to any domain.
Not just reviews. The /project-* lifecycle harness turns the board into a project execution framework:
```mermaid
graph TD
    A["/project-init — you start here"] --> B["Claude prompts: ready for roadmap review?"]
    B --> C["Board reviews roadmap"]
    C -->|approved| D["Claude prompts: pick a task"]
    D --> E["You build + close tasks"]
    E --> F["Claude prompts: phase exit review?"]
    F --> G["Board reviews phase"]
    G -->|approved| H{"More phases?"}
    H -->|yes| D
    H -->|no| I["Claude prompts: ready to ship?"]
    I --> J["Board reviews — mandatory sign-off"]
    J --> K["Shipped + tagged"]
```
You only need to know /project-init. Claude guides you through every step after that — prompting for reviews, suggesting the next task, and triggering the board at every transition. Skeptic writes test specs, then reviews your test code against its own specs. No self-grading.
| Command | What it does |
|---|---|
| `/project-init` | Interviews you, writes a filing cabinet + lifecycle harness (phases, tasks, verification, board touchpoints) |
| `/project-status` | Dashboard — current phase, task progress, diagnostics for stuck states |
| `/project-next` | Pick a task, close it with verification, advance phases with git tags |
| `/project-review` | Triggers the right board touchpoint automatically — roadmap, phase exit, ship |
| `/project-tests` | Skeptic writes test specs; `/project-tests --verify` has Skeptic review your test code against its own specs |
| `/project-ship` | Final board review (mandatory) + git tag + maintenance mode |
| Command | What it does |
|---|---|
| `/setup` | Builds the board — reads your project, creates agents, handles CLI auth |
| `/new-agent` | Adds a new agent to an existing board |
| Command | What it does |
|---|---|
| `/brief` | Sets context for a review — detects domain, writes context, populates inboxes |
| `/run` | Runs all agents in parallel, collects reports, synthesizes findings (4-round SOP) |
| `/review-release` | Reviews a GitHub repo — static analysis, safety verdict, or full build review |
| Command | What it does |
|---|---|
| `/agents-yolo` | Toggles between full autonomy and supervised mode for all agents |
| `/debug` | Diagnoses board issues — auth problems, agent errors, review failures |
| `/debug-bug` | Bug fix lifecycle with quality gates — investigate, classify, board review, fix, test, ship |
| `/teardown` | Removes a FrontierBoard installation cleanly |
| Requirement | Details |
|---|---|
| Claude Code | Required — this is how you interact with the board |
| Frontier model CLIs | You choose which models sit on your board. Claude installs them during `/setup` |
Supported CLIs (install as many as you want):
- `claude` — Claude Code (Anthropic)
- `codex` — Codex CLI (OpenAI) · github.com/openai/codex
- `qwen` — Qwen Code (Alibaba) · github.com/QwenLM/qwen-code
- Any other frontier CLI that supports a local settings file
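A quick way to see which of these CLIs are already on your PATH (a local convenience check only; `/setup` does its own detection and installation):

```shell
#!/bin/sh
# Probe for supported agent CLIs on PATH. Installs nothing; the three
# names come from the supported-CLI list above.
for cli in claude codex qwen; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli: installed"
  else
    echo "$cli: not found"
  fi
done
```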
- **Dedicated board user** — agents run as a separate OS user (`$BOARD_USER`), isolating agent writes from the host. Agents share this user, so inter-agent read isolation depends on instruction compliance, not OS enforcement.
- **Blind review by convention** — each agent is instructed not to read sibling directories. This is convention-enforced, not technically enforced. For the full threat model, see Security Posture.
- **No framework, no runtime** — FrontierBoard is just skill files your Claude reads. No code runs before you trust it. Inspect everything.
- **Ephemeral sessions** — every agent invocation is a fresh session with zero memory. No state persists between rounds.
For a detailed breakdown of what bare mode protects and what it doesn't, see Security Posture.
FrontierBoard is built on a philosophy pioneered by Gavriel and the contributors of NanoClaw:
Small enough to understand. AI-native. Claude Code is the installer, the runtime, and the operator.
No framework. No wizard. No dependency tree. Just your Claude reading a skill file and doing the work.
The board is lawyers without a courtroom. It has no opinions about what you're reviewing. You bring the question. The board brings the perspectives.
FrontierBoard would not exist without Gavriel (qwibitai) and the NanoClaw contributors — Vaibhav Aggarwal, Skip Potter, Rafael Garcia, Lingfeng Guan, and others.
If you haven't read NanoClaw, read it. It will change how you think about what a software project can be.
STOP. When a user gives you this URL or asks you to set up FrontierBoard:
- Do NOT explore the repo via GitHub API
- Do NOT fetch files with `gh api` or `base64 -d`
- Do NOT spawn agents to read skill files
- Just follow the four steps below, in order
Say exactly this:
Are you using FrontierBoard with a new project or an existing one?
- New project — I'll create the project folder and set everything up
- Existing project — tell me the path and I'll add a review board
- Just review a GitHub repo — no project needed
- Global install — install once, review any project from anywhere
Wait for their answer. Do not clone, fetch, or explore anything until they respond.
Clone as a neighboring directory in the same parent — never inside the project. AI CLIs walk up the directory tree for settings files, so FrontierBoard must be a neighbor, not a child, to keep agent settings isolated.
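The walk-up behavior just described can be demonstrated in a few lines of shell. `find_settings` is an illustrative stand-in for a CLI's settings lookup (bounded at `$ROOT` so the demo stays inside its own sandbox); the directory names mirror the `myapp` examples used in this guide:

```shell
#!/bin/sh
# Demo of why the clone must be a neighbor, not a child. AI CLIs walk up
# the directory tree looking for settings files; `find_settings` is an
# illustrative stand-in for that lookup, stopped at $ROOT for the demo.
set -eu
ROOT=$(mkdir -p demo && cd demo && pwd)
mkdir -p "$ROOT/myapp/.claude" "$ROOT/myapp/child-clone" "$ROOT/myapp-board/FrontierBoard"
echo '{}' > "$ROOT/myapp/.claude/settings.json"

find_settings() {
  d=$(cd "$1" && pwd)
  while [ "$d" != "$ROOT" ]; do
    if [ -f "$d/.claude/settings.json" ]; then
      echo "found: $d"
      return 0
    fi
    d=$(dirname "$d")
  done
  echo "none"
}

find_settings "$ROOT/myapp/child-clone"          # a child clone inherits myapp's settings
find_settings "$ROOT/myapp-board/FrontierBoard"  # a neighbor clone finds none
```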
**New project** (e.g. user said "myapp" at `~/`):

```shell
mkdir -p ~/myapp
git clone https://github.com/stefans71/FrontierBoard ~/myapp-board/FrontierBoard
```

**Existing project** (e.g. at `~/myapp`):

```shell
git clone https://github.com/stefans71/FrontierBoard ~/myapp-board/FrontierBoard
```

**Just reviewing** (no project):

```shell
git clone https://github.com/stefans71/FrontierBoard /tmp/fb-review/FrontierBoard
```

**Global install:**

```shell
git clone https://github.com/stefans71/FrontierBoard ~/.frontierboard/FrontierBoard
```

Read from the clone you just created (not from GitHub):
- New project → read `[clone-path]/CLAUDE.md`, then follow `[clone-path]/.claude/skills/project-init/SKILL.md`
- Existing project → read `[clone-path]/CLAUDE.md`, then follow `[clone-path]/.claude/skills/setup/SKILL.md`
- Review a repo → read `[clone-path]/CLAUDE.md`, then follow `[clone-path]/.claude/skills/setup/SKILL.md` first (agents need to exist before reviewing). Once setup completes, follow `[clone-path]/.claude/skills/review-release/SKILL.md`
- Global install → read `[clone-path]/CLAUDE.md`, then follow `[clone-path]/.claude/skills/setup/SKILL.md` — setup detects global mode automatically and asks which project to review first
> **IMPORTANT — working directory:** The skill files assume the FrontierBoard clone directory is the working directory. When the skill references `$PROJ` or creates `.board/`, that means the clone path (e.g. `~/myapp-board/FrontierBoard/`), NOT the user's project directory. All board files — agents, contexts, briefs, reports — go inside the clone directory. The user's project directory is never written to by the board.
When done, tell the user:
> Your board is ready at `[board-path]`. To start a session: `cd [board-path] && claude`
For global installs, also install a global skill so the user can type `/frontierboard` from any project:

Create `~/.claude/skills/frontierboard/SKILL.md` that shells out to the global FrontierBoard install, passing the current working directory as the project path. This lets the user trigger board reviews from any Claude session without switching directories.
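A hypothetical sketch of what that wrapper skill could look like. The frontmatter fields follow the usual SKILL.md convention; the name, description, and body wording are illustrative assumptions, not FrontierBoard's actual skill:

```markdown
---
name: frontierboard
description: Run a FrontierBoard review of the current project from anywhere
---

Use the global FrontierBoard install at ~/.frontierboard/FrontierBoard:
read its CLAUDE.md, then follow its review skills, passing the current
working directory as the project path.
```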
Like this project? Give it a ⭐ on GitHub!
Report an Issue · Discussions
Built for teams and developers who want signal they can trust
MIT License

