Commit a4ba07c (parent ae1a80d)

Update README and metadata for current feature set (#172)

2 files changed: 58 additions, 18 deletions

README.md: 50 additions, 12 deletions
@@ -3,7 +3,7 @@
 </p>
 
 <p align="center">
-<strong>Plug into any codebase. Generate specs. Ship features while you sleep.</strong>
+<strong>Plug into any codebase. Generate specs. Run autonomous feature loops with Claude Code or Codex.</strong>
 </p>
 
 <p align="center">
@@ -32,9 +32,9 @@
 
 ## What is Wiggum?
 
-Wiggum is an **AI agent** that plugs into any codebase and makes it ready for autonomous feature development — no configuration, no boilerplate.
+Wiggum is an **AI agent CLI** that plugs into any codebase and prepares it for autonomous feature development.
 
-It works in two phases. First, **Wiggum itself is the agent**: it scans your project, detects your stack, and runs an AI-guided interview to produce detailed specs, prompts, and scripts — all tailored to your codebase. Then it delegates the actual coding to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or any CLI-based coding agent, running an autonomous **implement → test → fix** loop until the feature ships.
+It works in two phases. First, **Wiggum itself is the agent**: it scans your project, detects your stack, and runs an AI-guided interview to produce detailed specs, prompts, and scripts tailored to your codebase. Then it delegates coding loops to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [Codex CLI](https://github.com/openai/codex), running **implement → test → fix** cycles until completion.
 
 Plug & play. Point it at a repo. It figures out the rest.
 
@@ -48,7 +48,7 @@ Plug & play. Point it at a repo. It figures out the rest.
 │  plug&play prompts         │      │  guides until done │
 │                            │      │                    │
 └────────────────────────────┘      └────────────────────┘
-   runs in your terminal             Claude Code / any agent
+   runs in your terminal             Claude Code / Codex CLI
 ```
 
 ---
@@ -65,6 +65,7 @@ Then, in your project:
 wiggum init             # Scan project, configure AI provider
 wiggum new user-auth    # AI interview → feature spec
 wiggum run user-auth    # Autonomous coding loop
+wiggum agent --dry-run  # Preview backlog automation plan
 ```
 
 Or skip the global install:
@@ -81,14 +82,18 @@ npx wiggum-cli init
 
 🎙️ **AI-Guided Interviews** — Generates detailed, project-aware feature specs through a structured 4-phase interview. No more blank-page problem.
 
-🔁 **Autonomous Coding Loops** — Hands specs to Claude Code (or any agent) and runs implement → test → fix cycles with git worktree isolation.
+🔁 **Autonomous Coding Loops** — Hands specs to Claude Code or Codex CLI and runs implement → test → fix cycles with git worktree isolation.
 
 **Spec Autocomplete** — AI pre-fills spec names from your codebase context when running `/run`.
 
 📥 **Action Inbox** — Review AI decisions inline without breaking your flow. The loop pauses, you approve or redirect, it continues.
 
 📊 **Run Summaries** — See exactly what changed and why after each loop completes, with activity feed and diff stats.
 
+🧠 **Backlog Agent** — Run `wiggum agent` to execute prioritized GitHub backlog items with dependency-aware scheduling and review-mode controls.
+
+🗂️ **Issue Intake** — Use `/issue` in TUI to browse GitHub issues and start specs directly from issue context.
+
 📋 **Tailored Prompts** — Generates prompts, guides, and scripts specific to your stack. Not generic templates — actual context about *your* project.
 
 🔌 **BYOK** — Bring your own API keys. Works with Anthropic, OpenAI, or OpenRouter. Keys stay local, never leave your machine.
@@ -105,12 +110,11 @@ npx wiggum-cli init
 wiggum init
 ```
 
-Wiggum reads your `package.json`, config files, source tree, and directory structure. A multi-agent AI system then analyzes the results:
+Wiggum reads your `package.json`, config files, source tree, and directory structure. It then runs a simplified analysis pipeline:
 
-1. **Planning Orchestrator** — creates an analysis plan based on detected stack
-2. **Parallel Workers** — Context Enricher explores code while Tech Researchers gather best practices
-3. **Synthesis** — merges results, detects relevant MCP servers
-4. **Evaluator-Optimizer** — QA loop that validates and refines the output
+1. **Codebase Analyzer (unified agent)** — builds project context, commands, and implementation guidance from your actual codebase
+2. **MCP Detection** — maps detected stack to essential/recommended MCP server suggestions
+3. **Context Persistence** — saves enriched context and generated assets under `.ralph/`
 
 Output: a `.ralph/` directory with configuration, prompts, guides, and scripts — all tuned to your project.
 
@@ -135,7 +139,7 @@ An AI-guided interview walks you through:
 wiggum run payment-flow
 ```
 
-Wiggum hands the spec + prompts + project context to your coding agent and runs an autonomous loop:
+Wiggum hands the spec + prompts + project context to Claude Code or Codex CLI and runs an autonomous loop:
 
 ```
 implement → run tests → fix failures → repeat
@@ -159,7 +163,10 @@ $ wiggum
 | `/new <feature>`     | `/n` | AI interview → feature spec |
 | `/run <feature>`     | `/r` | Run autonomous coding loop |
 | `/monitor <feature>` | `/m` | Monitor a running feature |
+| `/issue [query]`     |      | Browse GitHub issues and start a spec |
+| `/agent [flags]`     | `/a` | Run autonomous backlog executor |
 | `/sync`              | `/s` | Re-scan project, update context |
+| `/config [...]`      | `/cfg` | Manage API keys and loop settings |
 | `/help`              | `/h` | Show commands |
 | `/exit`              | `/q` | Exit |
 
@@ -214,9 +221,12 @@ Create a feature specification via AI-powered interview.
 
 | Flag | Description |
 |------|-------------|
-| `--ai` | Use AI interview (default in TUI mode) |
 | `--provider <name>` | AI provider for spec generation |
 | `--model <model>` | Model to use |
+| `--issue <number\|url>` | Add GitHub issue as context (repeatable) |
+| `--context <url\|path>` | Add URL/file context (repeatable) |
+| `--auto` | Headless mode (skip TUI) |
+| `--goals <description>` | Feature goals for `--auto` mode |
 | `-e, --edit` | Open in editor after creation |
 | `-f, --force` | Overwrite existing spec |
 
@@ -244,6 +254,13 @@ For loop models:
 - Claude CLI phases use `defaultModel` / `planningModel` (defaults: `sonnet` / `opus`).
 - Codex CLI phases default to `gpt-5.3-codex` across all phases.
 
+<details>
+<summary><code>wiggum sync</code></summary>
+
+Re-scan project and refresh saved context (`.ralph/.context.json`) using current provider/model settings.
+
+</details>
+
 <details>
 <summary><code>wiggum monitor &lt;feature&gt; [options]</code></summary>
 
@@ -253,6 +270,26 @@ Track feature development progress in real-time.
 |------|-------------|
 | `--interval <seconds>` | Refresh interval (default: 5) |
 | `--bash` | Use bash monitor script |
+| `--stream` | Force headless streaming monitor output |
+
+</details>
+
+<details>
+<summary><code>wiggum agent [options]</code></summary>
+
+Run the autonomous backlog executor (GitHub issue queue + dependency-aware scheduling).
+
+| Flag | Description |
+|------|-------------|
+| `--model <model>` | Model override (defaults from `ralph.config.cjs`) |
+| `--max-items <n>` | Max issues to process before stopping |
+| `--max-steps <n>` | Max agent steps before stopping |
+| `--labels <l1,l2>` | Only process issues matching these labels |
+| `--issues <n1,n2,...>` | Only process specific issue numbers |
+| `--review-mode <mode>` | `manual`, `auto`, or `merge` |
+| `--dry-run` | Plan actions without executing |
+| `--stream` | Stream output instead of waiting for final response |
+| `--diagnose-gh` | Run GitHub connectivity diagnostics for agent flows |
 
 </details>
 
@@ -309,6 +346,7 @@ Keys are stored in `.ralph/.env.local` and never leave your machine.
 
 - **Node.js** >= 18.0.0
 - **Git** (for worktree features)
+- **GitHub CLI (`gh`)** for `/issue` browsing and backlog agent operations
 - An AI provider API key (Anthropic, OpenAI, or OpenRouter)
 - A supported coding CLI for loop execution: [Claude Code](https://docs.anthropic.com/en/docs/claude-code) and/or [Codex CLI](https://github.com/openai/codex)
 
package.json: 8 additions, 6 deletions
@@ -1,7 +1,7 @@
 {
   "name": "wiggum-cli",
   "version": "0.17.3",
-  "description": "AI-powered feature development loop CLI",
+  "description": "AI agent CLI for spec-driven feature loops with Claude Code and Codex",
   "type": "module",
   "main": "dist/index.js",
   "files": [
@@ -29,15 +29,17 @@
     "cli",
     "ai-agent",
     "autonomous-coding",
-    "ralph-loop",
     "spec-generation",
+    "feature-specs",
+    "feature-loop",
+    "backlog-automation",
     "claude-code",
     "codex",
-    "ai-coding",
-    "feature-loop",
-    "code-generation",
     "developer-tools",
-    "tech-stack-detection"
+    "terminal-ui",
+    "tech-stack-detection",
+    "ralph-loop",
+    "typescript"
   ],
   "repository": {
     "type": "git",
