SpexFlow is a visual context/spec workflow tool built on React Flow. It helps you turn a concrete feature request into:
- curated repo context (via code search or manual selection), then
- a high-quality implementation plan/spec from an LLM, then
- a clean prompt you can paste into a "fresh-context" code agent (Codex / Claude Code / etc.).
It's optimized for "finish one well-defined feature in one shot" rather than "keep everything in your head".
A minimal workflow (instruction → code-search → context → LLM):
A larger canvas where you keep reusable context blocks and rerun only the parts that changed:
Two hops of searching and planning:
SpexFlow loads a local code repo and lets you run a small node-based workflow:
- Instruction → produce/compose plain text input
- Code Search Conductor → generate multiple complementary search queries (one per downstream Code Search node)
- Code Search (Relace fast agentic search) → returns `{ explanation, files }`
- Manual Import → select files/folders and produce the same `{ explanation, files }` shape (no external search)
- Context Converter → turns file ranges into line-numbered text context
- LLM → takes context + prompt and generates an output (spec/plan/etc.)
Requirements:
- Node.js 18+
- pnpm 9+
Run `pnpm install`, then `pnpm dev`.
- Web UI: open the Vite dev server (printed in the terminal, typically `http://localhost:5173`)
- Server health: `curl http://localhost:3001/api/health`
Open Settings (top-right) and set:
- Code Search: Relace API key (get one here)
- LLM providers/models: add at least one model under a provider with an OpenAI-compatible endpoint
Use the default canvas, or build:
instruction → code-search → context-converter → llm
Then copy the LLM output and paste it into your coding agent.
- A canvas is a directed graph (nodes + edges).
- Each node has inputs (edges into it) and output (stored on the node).
- Node outputs are persisted locally in `data.json`, so you can reuse them and rerun only the stale pieces.
- Run: executes one node. If the node has any incoming edges, all predecessors must be `success`.
- Chain: executes the whole downstream subgraph from a node, respecting dependencies, and shows progress in Chain Manager.
- Locked: node cannot be dragged and won't be reset by Chain; useful for "stable cached context".
- Muted: node returns empty output immediately (no API calls); useful for temporarily disabling branches.
Instruction node:
- Purpose: write your feature request / constraints / acceptance criteria.
- Input: optional predecessor text nodes.
- Output: a single composed string (predecessor text + your typed text).
Code Search Conductor node:
- Purpose: generate multiple complementary search queries for several downstream Code Search nodes.
- Input: optional predecessor text nodes + its own query field.
- Output: JSON mapping `successor_node_id -> query`.
- Requirement: must have at least one direct successor `code-search` node (it assigns queries by node id).
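For illustration, a conductor with two downstream Code Search nodes might produce a mapping like the sketch below; the node ids and queries are made up, not actual SpexFlow output.

```ts
// Hypothetical Code Search Conductor output: one query per direct successor
// code-search node, keyed by that node's id. Ids and queries are illustrative.
const conductorOutput: Record<string, string> = {
  "code-search-a1b2": "Where are canvas nodes and edges persisted to data.json?",
  "code-search-c3d4": "How is a node's run status (success/failure) tracked?",
};
```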
Code Search node:
- Purpose: use Relace fast agentic search to find relevant code.
- Config:
  - `repoPath`: absolute path, or relative to this project directory
  - `query`: natural language query
  - `debugMessages`: dumps the full raw tool conversation to `logs/relace-search-runs/<runId>.json`
- Output shape (shared with Manual Import):
  - `explanation`: string
  - `files`: `Record<relPath, Array<[startLine, endLine]>>`
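As a sketch, that shared output shape can be written as a TypeScript type; the type name and example file paths below are illustrative, not part of SpexFlow.

```ts
// Sketch of the shared Code Search / Manual Import output shape.
// `explanation` and `files` match the fields described above; everything else is illustrative.
type SearchOutput = {
  explanation: string;
  // Repo-relative file path -> list of [startLine, endLine] ranges.
  files: Record<string, Array<[number, number]>>;
};

const example: SearchOutput = {
  explanation: "Canvas persistence is handled by the server-side data store.",
  files: {
    "server/dataStore.ts": [[1, 40], [88, 120]],
    "src/state/canvas.ts": [[10, 55]],
  },
};
```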
Manual Import node:
- Purpose: hand-pick local files/folders as context (no external search).
- Config:
  - `repoPath`
  - `items`: selected files and folders (stored as relative paths; contents are never persisted)
- Folder behavior:
- Non-recursive: includes only direct child files (one level).
- Filters by a hardcoded "trusted extensions" allowlist (includes `.md`) in `server/repoBrowser.ts`; see the sketch after this list.
- Run behavior:
- Validates every selected path at run time; if a file/folder no longer exists, the node fails loudly.
- Output: identical shape to `code-search`, so downstream Context Converter can reuse the same path/range format.
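The non-recursive folder listing with an extension allowlist could look roughly like this; it is not the code in `server/repoBrowser.ts`, and the allowlist contents beyond `.md` are a guess.

```ts
// Sketch of the folder behavior described above: list only direct child files,
// keeping those whose extension is on a trusted allowlist. Illustrative only.
import { readdirSync } from "node:fs";
import { extname, join } from "node:path";

const TRUSTED_EXTENSIONS = new Set([".ts", ".tsx", ".js", ".md"]); // illustrative subset

function listDirectFiles(folder: string): string[] {
  return readdirSync(folder, { withFileTypes: true })
    .filter((entry) => entry.isFile() && TRUSTED_EXTENSIONS.has(extname(entry.name)))
    .map((entry) => join(folder, entry.name));
}
```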
Context Converter node:
- Purpose: turn `{ explanation, files }` into a single line-numbered context string.
- Input: one or more `code-search` / `manual-import` / `context-converter` predecessors.
- Config: `fullFile` (full files) vs ranges.
- Behavior: merges and deduplicates overlapping/adjacent line ranges across all predecessors (per repo) before building context.
- UI: shows the merged file ranges in the sidebar ("Merged File Ranges").
- Output: a single string, joining multiple predecessor contexts with `---`.
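A minimal sketch of that merge/dedup step, assuming inclusive 1-based line ranges; it mirrors the behavior described above, not the actual implementation.

```ts
// Merge overlapping or adjacent [start, end] line ranges into a minimal set.
function mergeRanges(ranges: Array<[number, number]>): Array<[number, number]> {
  const sorted = [...ranges].sort((a, b) => a[0] - b[0]);
  const merged: Array<[number, number]> = [];
  for (const [start, end] of sorted) {
    const last = merged[merged.length - 1];
    if (last && start <= last[1] + 1) {
      // Overlapping or directly adjacent: extend the previous range.
      last[1] = Math.max(last[1], end);
    } else {
      merged.push([start, end]);
    }
  }
  return merged;
}

// mergeRanges([[10, 20], [18, 30], [31, 40], [80, 90]]) -> [[10, 40], [80, 90]]
```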
LLM node:
- Purpose: run a chat-completions style LLM call over the composed prompt.
- Input: optional predecessor text nodes.
- Config:
  - `model` (selected from Settings)
  - `systemPrompt` (optional)
  - `query`
- Output: a single string.
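As a rough sketch, a call against an OpenAI-compatible chat-completions endpoint using these fields could look like the following; this follows the standard chat-completions request shape, not necessarily the exact request SpexFlow builds.

```ts
// Sketch of a chat-completions call using the configured endpoint, apiKey and model.
// Request/response fields follow the standard OpenAI-compatible API; error handling omitted.
async function runLlmNode(
  endpoint: string,
  apiKey: string,
  model: string,
  systemPrompt: string,
  prompt: string,
): Promise<string> {
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: prompt },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```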
The app enforces a connection matrix (invalid edges are rejected). The current rules:
| Source (output) ↓ \ Target (input) → | instruction | code-search-conductor | manual-import | code-search | context-converter | llm |
|---|---|---|---|---|---|---|
| instruction | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ |
| code-search-conductor | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| manual-import | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| code-search | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| context-converter | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| llm | ✅ | ✅ | ❌ | ✅ | ❌ | ✅ |
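The same rules, written out as a source → allowed-targets allowlist; this is a restatement of the table above, not SpexFlow's actual rule table.

```ts
// Valid edge targets per source node type, per the connection matrix above.
const allowedTargets: Record<string, string[]> = {
  "instruction":           ["instruction", "code-search-conductor", "code-search", "llm"],
  "code-search-conductor": ["code-search"],
  "manual-import":         ["context-converter"],
  "code-search":           ["context-converter"],
  "context-converter":     ["instruction", "code-search-conductor", "code-search", "context-converter", "llm"],
  "llm":                   ["instruction", "code-search-conductor", "code-search", "llm"],
};
```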
- Hand Mode: pan the canvas (`H`, or hold `Space` temporarily)
- Select Mode: select nodes, drag to box-select (`V`)
- Add nodes: Code Search / Manual Import / Search Conductor / Context / Instruction / LLM
- Reset Canvas: clears the outputs of all unlocked nodes (not allowed while any node is running)
For the selected node, the sidebar shows:
- Settings (fields vary per node type)
- Actions:
- Run: run this node
- Chain: run everything downstream
- Reset: clear this node's output (unless locked)
- Output:
- Preview + "View All"
- Copy output to clipboard
- Drag-select multiple nodes → a small panel appears with Copy and Delete.
- Hotkeys:
  - `Cmd/Ctrl+C`: copy selected nodes
  - `Cmd/Ctrl+V`: paste
- You can keep multiple canvases as tabs.
- All tabs persist in `data.json`.
Open Settings (top-right):
- Language: English / 中文
- LLM Providers:
  - Add providers with `endpoint` + `apiKey` (must be an OpenAI-compatible chat-completions endpoint)
  - Add models (model id + display name); see the sketch after this list
- Code Search:
- currently supports Relace
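Purely as an illustration of what a provider entry holds; the field names below are guesses, and the real shape stored in `data.json` may differ.

```ts
// Hypothetical LLM provider entry: an OpenAI-compatible endpoint, an API key,
// and one or more models with display names. Illustrative values only.
const exampleProvider = {
  name: "my-provider",
  endpoint: "https://api.example.com/v1",
  apiKey: "sk-...",
  models: [{ id: "gpt-4.1", displayName: "GPT-4.1" }],
};
```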
- `data.json`: all canvases + outputs + settings (gitignored); delete it to reset the app state
- `logs/relace-search.jsonl`: appended run logs (gitignored)
- `logs/relace-search-runs/<runId>.json`: optional full message dumps when `debugMessages` is enabled
- Frontend: Vite + React (`src/`)
- Graph UI: React Flow (`@xyflow/react`)
- Backend: Express + TypeScript (`server/`), runs via `tsx watch`
- Proxy: Vite proxies `/api` to `http://localhost:3001` (`vite.config.ts`)
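A minimal sketch of that proxy setup in `vite.config.ts`; the project's real config likely contains more than this.

```ts
// vite.config.ts (sketch): forward /api requests to the Express server on port 3001.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      "/api": "http://localhost:3001",
    },
  },
});
```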
- Code file auto merge/deduplication + visualization for context converter node output
- Export canvas to local file
- Custom LLM parameters (e.g. reasoning, temperature)
- Support local LLMs
- Token statistics
- Backup running history
- Explicit spec document management interface
If you hit `Cannot find matching keyid` from Corepack, install pnpm directly (see AGENTS.md).

