diff --git a/docs/plans/2026-03-27-codebase-simplification.md b/docs/plans/2026-03-27-codebase-simplification.md new file mode 100644 index 0000000..326f5d3 --- /dev/null +++ b/docs/plans/2026-03-27-codebase-simplification.md @@ -0,0 +1,301 @@ +# Codebase Simplification Plan + +**Date**: 2026-03-27 +**Goal**: Clean up the entire Dobby codebase for clarity, consistency, and maintainability while preserving all functionality. + +## Overview + +The codebase is ~24K lines across 173 files. It's generally well-structured but has accumulated duplication, oversized files, inconsistent patterns, and minor cleanup debt. This plan organizes cleanup into 6 phases, ordered by impact and safety (easy wins first, structural refactors last). + +--- + +## Codebase Analysis + +### Key Problem Areas + +| Area | Issue | Files | Impact | +|------|-------|-------|--------| +| `pipeline.ts` | 1,536 lines, monolithic | `src/lib/issues/pipeline.ts` | Hard to test/maintain | +| Timeline builders | Two functions with 3 structural differences | `src/lib/claude/session-detail-reader.ts` | 200+ lines of near-duplication | +| Notification upserts | Same upsert pattern 4x in API routes | telegram, slack API routes | ~200 lines repeated | +| Shared CSS classes | `inputClasses` duplicated exactly | `agent-form.tsx`, `env-vars-editor.tsx` | Divergence risk | +| `formatTokens` | Identical function in 2 session pages | `sessions/page.tsx`, `sessions/[id]/page.tsx` | Copy-paste | +| Metadata extraction | Same loop appears 2x | `session-detail-reader.ts` | Copy-paste | +| Settings page | 990 lines, multiple concerns | `src/app/(app)/settings/page.tsx` | Hard to maintain | +| Large pages | 5 pages over 700 lines each | issues/, agents/, settings pages | Readability | +| MCP PATCH validation | No input validation on update | `mcp/servers/[id]/route.ts` | Security gap | + +### Architecture Notes + +- **Agent core** (`src/lib/agent/core.ts`) is an agentic loop: gather tools → call LLM → execute tool_calls 
→ loop
- **Claude session module** (`src/lib/claude/`) reads JSONL session files and stores summaries in DB — two readers share some logic
- **Issues pipeline** (`src/lib/issues/pipeline.ts`) orchestrates GitHub issue processing through 8 phases (plan → review → fix → implement → code review → code fix → PR → notify) — biggest single file
- **API routes** use a consistent `withErrorHandler` wrapper but have inconsistent validation depth
- **MCP client** manages subprocess-based tool servers with a 5-min failure cache

### Critical Files to Reference

- `src/lib/claude/session-detail-reader.ts` (447 lines) — `buildTimeline()` vs `buildSubAgentTimeline()` have 3 structural differences
- `src/lib/claude/session-utils.ts` — canonical `TokenUsage` definition (only place it's defined)
- `src/lib/issues/pipeline.ts` (1,536 lines) — the main refactoring target, exports `runIssuePipeline` and `buildWorktreePath`
- `src/app/api/agents/[agentId]/telegram/route.ts` — canonical upsert pattern duplicated across routes
- `src/app/(app)/sessions/page.tsx` and `sessions/[id]/page.tsx` — both define `formatTokens()`

---

## Phase 1: Dead Code & Unused Exports

**Risk**: Low — removing unused code can't break anything if verified unused.

### Steps

1. **Audit all exports across `src/lib/`** — For each exported function/type, grep for imports. Any export with zero external references should be either un-exported (made module-private) or removed entirely. Do NOT assume anything is dead without verifying via grep first.

2. **Run a comprehensive unused-export scan** — grep is what actually finds orphaned exports (`tsc --noEmit` does not report unused exports on its own); use grep to list exports with zero importers, then re-run `bun run tsc --noEmit` after each removal to confirm nothing broke. Only remove code confirmed to have zero importers.
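The core of the Step 2 scan can be sketched as a pure function — file contents in, orphaned export names out. This is illustrative only: the function name and regex are assumptions, a real pass would walk the filesystem and handle re-exports, and the substring-based usage check deliberately over-counts usage (a name in a comment counts), which errs toward keeping code — the safe direction for this phase.

```typescript
// Matches `export function foo`, `export const foo`, `export type Foo`, etc.
const EXPORT_RE =
  /export\s+(?:async\s+)?(?:function|const|class|type|interface)\s+([A-Za-z0-9_]+)/g;

// Returns exported names that no OTHER file mentions. File contents are passed
// in as a map so the logic stays testable without touching the filesystem.
export function findOrphanedExports(files: Record<string, string>): string[] {
  const exportedBy = new Map<string, string>(); // name -> defining file
  for (const [path, src] of Object.entries(files)) {
    for (const m of src.matchAll(EXPORT_RE)) exportedBy.set(m[1], path);
  }
  const orphans: string[] = [];
  for (const [name, definingFile] of exportedBy) {
    // Crude "used" check: any other file containing the name counts as a use.
    const used = Object.entries(files).some(
      ([path, src]) => path !== definingFile && src.includes(name)
    );
    if (!used) orphans.push(name);
  }
  return orphans;
}
```

Each name this reports still needs a manual grep before deletion, per Step 1.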
+ +### Verification +- `bun run tsc --noEmit` must pass after each deletion +- `bun test` must pass — all 32 existing test files must remain green +- Grep each name before removing to confirm zero external references + +--- + +## Phase 2: Type & Constant Deduplication + +**Risk**: Low — extracting shared constants is mechanical. + +### Steps + +1. **Extract shared CSS class constants** — create `src/components/shared/form-classes.ts`: + ```typescript + export const inputClasses = "w-full border border-border bg-background px-3 py-2 text-[15px] font-mono text-foreground placeholder:text-muted/40 outline-none transition-all focus:border-accent input-focus"; + export const labelClasses = "block text-[13px] font-medium text-foreground/70 mb-1.5"; + ``` + Update `src/components/agents/agent-form.tsx` and `src/components/agents/env-vars-editor.tsx` to import from there. + +2. **Add shared `formatTokens()` utility** — add to the existing `src/lib/utils/format.ts` (which already contains `formatDuration()`). Also update the existing test file `src/lib/utils/__tests__/format.test.ts` with tests for the new `formatTokens()` function. Update: + - `src/app/(app)/sessions/page.tsx` + - `src/app/(app)/sessions/[id]/page.tsx` + +3. **Move `DENIED_ENV_KEYS`** from `src/lib/runner/agent-memory.ts` to `src/lib/validations/constants.ts` to break the cross-layer import from `src/lib/validations/agent.ts` into the runner module. Co-locating with the validations layer avoids creating a generic top-level `constants.ts` dumping ground. Update both `agent-memory.ts` and `agent.ts` to import from the new location. + +### Verification +- `bun run tsc --noEmit` must pass +- `bun test` must pass +- `bun run dev` — spot-check the UI renders correctly + +--- + +## Phase 3: Function-Level Deduplication in `src/lib/claude/` + +**Risk**: Medium — these are data-processing functions, bugs would show as incorrect session displays. + +### Steps + +1. 
**Write regression tests for timeline builders before merging** — Verify test coverage for `buildTimeline` and `buildSubAgentTimeline` in `src/lib/claude/__tests__/`. Currently, no tests cover these functions. Before any refactoring, write regression tests that capture the current output of both functions (including edge cases for all 3 structural differences). This ensures the merge doesn't silently break session display. + +2. **Merge `buildTimeline()` and `buildSubAgentTimeline()`** in `session-detail-reader.ts`: + - These share ~80% of their logic but have **3 structural differences**: + 1. Sidechain filtering (`buildTimeline` filters out sidechains, `buildSubAgentTimeline` does not) + 2. User message type filtering (`buildTimeline` uses `external` only, `buildSubAgentTimeline` uses `external || internal`) + 3. Sub-agent launch deduplication (present in `buildTimeline`, absent in `buildSubAgentTimeline`) + - **Action**: Parameterize with options: `buildTimeline(entries, { filterSidechain?: boolean, includeInternalMessages?: boolean, trackSubAgentLaunches?: boolean })` + - Default values should match current `buildTimeline` behavior so existing callers don't change + - `buildSubAgentTimeline` callers switch to `buildTimeline(entries, { includeInternalMessages: true })` + +3. **Extract shared metadata extraction helper** — `session-detail-reader.ts` has the same metadata-extraction loop (slug, model, gitBranch, cwd) appearing **2 times** (lines ~260-270 and ~331-341). Extract to `extractSessionMetadata(entries)` helper in `src/lib/claude/session-utils.ts`. + +### Verification +- `bun run tsc --noEmit` +- Run claude session tests: `bun test src/lib/claude/` +- Verify the new timeline regression tests pass after the merge +- Manual: open the sessions page and verify timeline renders correctly + +--- + +## Phase 4: API Route Cleanup + +**Risk**: Medium — API changes could break the frontend. + +### Steps + +1. 
**Add unique constraint on `notificationConfigs.channel`** — the `channel` column in `src/lib/db/schema.ts` currently has no unique constraint. Before building the upsert helper, a migration is required:
   - Verify data uniqueness: query `SELECT channel, COUNT(*) FROM notification_configs GROUP BY channel HAVING COUNT(*) > 1` to confirm no duplicates exist. Channel values use composite keys like `telegram-agent:{agentId}` and `slack-issues`, so they should be unique in practice.
   - Add `.unique()` to the `channel` column in `src/lib/db/schema.ts`
   - Run `bun run db:generate` to create a migration file
   - The migration will be auto-applied on next startup

   **This is a prerequisite for Step 2** — the `ON CONFLICT(channel)` clause requires a unique constraint to function.

2. **Extract notification config upsert helper** — the select-then-insert/update upsert pattern repeats in 4+ routes:
   - `src/app/api/agents/[agentId]/telegram/route.ts`
   - `src/app/api/issues/telegram/route.ts`
   - `src/app/api/issues/slack/route.ts`
   - `src/app/api/notifications/telegram/bots/copy/route.ts`

   **Action**: Create `src/lib/db/notification-config.ts` with:
   ```typescript
   export async function upsertNotificationConfig(channel: string, config: Record<string, unknown>) {
     // Use SQLite's INSERT ... ON CONFLICT(channel) DO UPDATE SET ...
     // This requires the unique constraint added in Step 1.
     // The current select-then-branch pattern is vulnerable to TOCTOU race conditions
     // where concurrent requests both find no row and both try to insert.
   }
   ```
   The shared helper **must** use `INSERT ... ON CONFLICT(channel) DO UPDATE SET ...` rather than reproducing the existing select-then-branch pattern. Replace all 4 inline upsert blocks.

3. **Fix missing validation in MCP server PATCH** — `src/app/api/mcp/servers/[id]/route.ts` accepts arbitrary PATCH bodies with no validation and passes raw `body` directly to `.set(body)`.
Add a Zod schema that allowlists only the mutable fields from the `mcpServers` schema: + - **Allowed**: `name`, `command`, `args`, `env`, `enabled` + - **Excluded**: `id`, `createdAt` (immutable fields must not be settable via PATCH) + + ```typescript + const mcpServerUpdateSchema = z.object({ + name: z.string().optional(), + command: z.string().optional(), + args: z.array(z.string()).optional(), + env: z.record(z.string()).optional(), + enabled: z.boolean().optional(), + }); + ``` + +### Verification +- `bun run tsc --noEmit` +- `bun test` must pass +- `bun run dev` — test the telegram config save, MCP server edit, and issue config pages +- Verify the migration applies cleanly: restart dev server and check no errors + +--- + +## Phase 5: Large Component Decomposition + +**Risk**: Medium — UI changes are visible but straightforward to verify. + +### Steps + +1. **Audit shared state and component boundaries before splitting** — Before extracting any sections, audit each target component for: + - Shared state (e.g., global loading/saving flags, form state shared across sections) + - Event handlers passed between sections + - Server vs. client component boundaries (the settings page is `"use client"`, so all extracted sections will also be client components) + - Document the prop interfaces each extracted section will need + +2. **Split `settings/page.tsx` (990 lines)** into: + - `src/components/settings/mcp-servers-section.tsx` — MCP server list/add/edit/delete + - `src/components/settings/env-vars-section.tsx` — environment variable management + - `src/components/settings/session-retention-section.tsx` — retention config + - `src/app/(app)/settings/page.tsx` — thin page that composes the sections + - All extracted sections are client components (parent is `"use client"`). Pass shared state (loading flags, refresh callbacks) as props. + +3. 
**Split `projects/[id]/agents/[agentId]/page.tsx` (925 lines)** into: + - `src/components/agents/agent-detail-header.tsx` — name, status, edit toggle + - `src/components/agents/agent-runs-list.tsx` — run history table + - `src/components/agents/telegram-config-section.tsx` — telegram setup/test UI + - Keep the page as a composer. Audit server/client boundary before splitting. + +4. **Split `claude-panel.tsx` (369 lines)**: + - Extract streaming chat logic into `useClaudeChat()` custom hook + - Extract message rendering into a subcomponent + - Target: page component < 200 lines + +### Verification +- `bun run tsc --noEmit` +- `bun test` must pass +- `bun run dev` — visually verify settings page, agent detail page, and issues pages +- Side-by-side visual comparison before and after to catch layout regressions + +--- + +## Phase 6: `pipeline.ts` Refactor (1,536 lines) + +**Risk**: Higher — this is the core issue-processing engine. Must be done carefully. **Recommend as a separate PR.** + +### Steps + +1. **Map actual phase boundaries** — the pipeline has **8 numbered phases** (from `PHASE_STATUS_MAP` in `types.ts`): + - Phase 0: `pending` — Initial state + - Phase 1: `planning` — Create implementation plan via LLM + - Phase 2: `reviewing_plan_1` — Adversarial plan review (runs in parallel with Phase 3) + - Phase 3: `reviewing_plan_2` — Completeness plan review (runs in parallel with Phase 2) + - Phase 4: `implementing` — Code implementation via Claude session + - Phase 5-6: `reviewing_code_1` / `reviewing_code_2` — **3 parallel specialist reviewers** (Bugs & Logic, Security & Edge Cases, Design & Performance) dispatched via `Promise.allSettled()`. The PHASE_STATUS_MAP has only 2 status entries for code review, but the implementation runs 3 specialists tracked as sessions `5a`, `5b`, `5c`. + - Phase 7: `creating_pr` — PR creation and git operations + + Plus iterative fix loops (plan fix up to 5 iterations, code fix up to 3 iterations) and notification at the end. + +2. 
**Extract into phase modules** under `src/lib/issues/pipeline/`: + - `orchestrator.ts` — main `runIssuePipeline()` function that calls phases in order, manages session state + - `planning.ts` — plan generation phase (Phase 1) + - `plan-review.ts` — adversarial + completeness review, run in parallel (Phases 2-3) + - `plan-fix.ts` — iterative plan fix loop with convergence detection (up to 5 iterations) + - `implementation.ts` — Claude session execution (Phase 4) + - `code-review.ts` — 3 parallel code review specialists (Bugs & Logic, Security & Edge Cases, Design & Performance) with read-only enforcement (Phases 5-6) + - `code-fix.ts` — code fix loop with convergence detection (up to 3 iterations) + - `pr.ts` — PR creation and git operations (Phase 7) + - `notifications.ts` — Telegram/Slack message formatting and sending + +3. **Handle cross-cutting concerns** — the pipeline has several concerns that span phases: + - **`isCancelled()` polling** — checked at ~10 points throughout the pipeline. Must be passed to or accessible from each phase module. + - **`getUserAnswers()` for Q&A** — used in plan fix and code fix loops for interactive user input. Pass as a dependency to fix-loop modules. + - **Session resumption** (`resumeSessionId`) — used to resume Claude sessions across iterations. The orchestrator tracks session IDs and passes them to relevant phases. + + These should either live in the orchestrator (preferred, since they manage shared state) or be passed as a context/dependencies object to phase modules. + +4. **Keep the public API identical** — `src/lib/issues/pipeline.ts` becomes a thin re-export: + ```typescript + export { runIssuePipeline } from "./pipeline/orchestrator"; + export { buildWorktreePath } from "./pipeline/orchestrator"; + ``` + This means no callers need to change. + +5. **Move shared pipeline types** to `src/lib/issues/types.ts` (many are likely already there). 
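The cross-cutting concerns in Step 3 can be sketched as a single context object that the orchestrator owns and passes to each phase module. The interface and field names below are hypothetical — the real `runIssuePipeline` signatures should drive the final shape:

```typescript
// Hypothetical shared context threaded through every phase module.
export interface PipelineContext {
  issueId: string;
  worktreePath: string;
  isCancelled: () => Promise<boolean>;                        // polled at ~10 points today
  getUserAnswers: (questions: string[]) => Promise<string[]>; // Q&A in the fix loops
  sessionIds: Map<string, string>;                            // phase key -> resumeSessionId
}

// A phase module takes the context plus its own inputs and returns its result.
// The body here is a placeholder, not the real planning implementation.
export async function runPlanningPhase(
  ctx: PipelineContext,
  issueBody: string
): Promise<{ plan: string; sessionId: string }> {
  if (await ctx.isCancelled()) throw new Error("pipeline cancelled");
  const sessionId = "session-1"; // placeholder: the real Claude session id
  ctx.sessionIds.set("planning", sessionId); // recorded for later resumption
  return { plan: `Plan for: ${issueBody}`, sessionId };
}
```

Keeping cancellation, Q&A, and session tracking on one object means a new cross-cutting concern touches the context type once instead of every phase signature.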
+ +### Verification +- `bun run tsc --noEmit` +- Run pipeline tests: `bun test src/lib/issues/` +- `bun test` — all 32 test files must pass +- Manual: trigger an issue pipeline run and verify it completes + +--- + +## Dependencies + +No new runtime dependencies needed. All changes use existing libraries (drizzle-orm, zod, react). Phase 4 Step 1 requires a Drizzle schema migration (`bun run db:generate`) for the unique constraint on `notificationConfigs.channel`. + +## Testing Strategy + +1. **Type-check gate**: `bun run tsc --noEmit` after every phase +2. **Automated test gate**: `bun test` after every phase — all 32 existing test files must pass. Key test locations: + - `src/lib/claude/__tests__/` — claude session tests + - `src/lib/issues/__tests__/pipeline.test.ts` — pipeline tests + - `src/lib/runner/` — runner tests + - `src/lib/validations/` — validation tests + - `src/lib/utils/__tests__/format.test.ts` — format utility tests +3. **Dev server smoke test**: `bun run dev` and verify key pages render +4. **Manual verification** for UI-facing changes (phases 4-5) +5. **Pipeline integration test** (phase 6): Run a test issue through the pipeline after refactoring + +## Execution Order & Estimates + +| Phase | Description | Files Changed | Risk | +|-------|-------------|---------------|------| +| 1 | Dead code audit & removal | 2-5 | Low | +| 2 | Constant/utility dedup | 8-10 | Low | +| 3 | Claude module dedup | 4-6 (includes new tests) | Medium | +| 4 | API route cleanup (includes migration) | 7-9 | Medium | +| 5 | Component decomposition | 8-12 | Medium | +| 6 | Pipeline refactor (separate PR) | 10-12 (new) + 1 (rewrite) | Higher | + +**Total**: ~39-54 files touched across all phases. + +## Risks & Mitigations + +- **Regression risk**: Mitigated by running type-checker, `bun test`, and dev server after each phase. Commit after each phase so rollback is easy. +- **Pipeline breakage (Phase 6)**: This is the highest-risk phase. 
Keep the public API (`runIssuePipeline`, `buildWorktreePath`) identical. Do as a separate PR. +- **UI visual regressions (Phase 5)**: Component extraction shouldn't change rendered output. Side-by-side visual comparison recommended. Audit shared state and prop interfaces before splitting. +- **Import path breaks**: Using `@/` aliases consistently avoids relative-path confusion. +- **Notification race condition (Phase 4)**: Current upsert pattern has TOCTOU vulnerability — the shared helper must use `ON CONFLICT` to fix this. Requires adding a unique constraint on `notificationConfigs.channel` first (via Drizzle migration). +- **Schema migration (Phase 4)**: Adding `.unique()` to `channel` will fail if duplicate rows exist. Verify data uniqueness before applying the migration. +- **Timeline merge regression (Phase 3)**: No existing tests cover `buildTimeline`/`buildSubAgentTimeline`. Regression tests must be written before the merge to catch silent breakage. + +--- + +VERDICT: READY diff --git a/drizzle/meta/_journal.json b/drizzle/meta/_journal.json index f3795c7..b278a2f 100644 --- a/drizzle/meta/_journal.json +++ b/drizzle/meta/_journal.json @@ -94,4 +94,4 @@ "breakpoints": true } ] -} +} \ No newline at end of file diff --git a/src/app/(app)/issues/[id]/page.tsx b/src/app/(app)/issues/[id]/page.tsx index e553a5e..2c20b88 100644 --- a/src/app/(app)/issues/[id]/page.tsx +++ b/src/app/(app)/issues/[id]/page.tsx @@ -3,6 +3,7 @@ import { useEffect, useState, useCallback, use } from "react"; import Link from "next/link"; import { useRouter } from "next/navigation"; +import { SlackIcon, TelegramIcon } from "@/components/shared/source-icons"; import ReactMarkdown from "react-markdown"; import remarkGfm from "remark-gfm"; import { @@ -31,26 +32,6 @@ function encodeProjectDir(fsPath: string): string { return fsPath.replace(/[/.]/g, "-"); } -function SlackIcon({ className }: { className?: string }) { - return ( - - - - - - - ); -} - -function TelegramIcon({ className }: { 
className?: string }) { - return ( - - - - - ); -} - interface IssueDetail { id: string; repositoryId: string; diff --git a/src/app/(app)/issues/page.tsx b/src/app/(app)/issues/page.tsx index 1f72bc8..6731e6d 100644 --- a/src/app/(app)/issues/page.tsx +++ b/src/app/(app)/issues/page.tsx @@ -2,6 +2,7 @@ import { useEffect, useState, useCallback, useRef } from "react"; import Link from "next/link"; +import { SlackIcon, TelegramIcon } from "@/components/shared/source-icons"; import { Settings2, Loader2, @@ -32,26 +33,6 @@ interface Issue { archivedAt: string | null; } -function SlackIcon({ className }: { className?: string }) { - return ( - - - - - - - ); -} - -function TelegramIcon({ className }: { className?: string }) { - return ( - - - - - ); -} - const STATUS_STYLES: Record = { pending: { bg: "bg-muted/10", text: "text-muted-foreground" }, planning: { bg: "bg-accent/10", text: "text-accent", dot: "bg-accent animate-pulse" }, diff --git a/src/app/(app)/sessions/[id]/page.tsx b/src/app/(app)/sessions/[id]/page.tsx index d4d642c..ee3a0dc 100644 --- a/src/app/(app)/sessions/[id]/page.tsx +++ b/src/app/(app)/sessions/[id]/page.tsx @@ -29,12 +29,7 @@ import type { TimelineEntry, SubAgentInfo, } from "@/lib/claude/types"; - -function formatTokens(n: number): string { - if (n >= 1_000_000) return `${(n / 1_000_000).toFixed(1)}M`; - if (n >= 1_000) return `${(n / 1_000).toFixed(1)}K`; - return String(n); -} +import { formatTokens } from "@/lib/utils/format"; function StatusBadge({ status }: { status: string }) { return ( diff --git a/src/app/(app)/sessions/page.tsx b/src/app/(app)/sessions/page.tsx index fed5892..076873b 100644 --- a/src/app/(app)/sessions/page.tsx +++ b/src/app/(app)/sessions/page.tsx @@ -16,12 +16,7 @@ import { } from "lucide-react"; import { formatDistanceToNow } from "date-fns"; import type { AgentSession, AgentStatusResponse } from "@/lib/claude/types"; - -function formatTokens(n: number): string { - if (n >= 1_000_000) return `${(n / 
1_000_000).toFixed(1)}M`; - if (n >= 1_000) return `${(n / 1_000).toFixed(1)}K`; - return String(n); -} +import { formatTokens } from "@/lib/utils/format"; function totalTokens(session: AgentSession): number { const t = session.tokenUsage; diff --git a/src/app/(app)/settings/page.tsx b/src/app/(app)/settings/page.tsx index 12ea989..4affffc 100644 --- a/src/app/(app)/settings/page.tsx +++ b/src/app/(app)/settings/page.tsx @@ -1,454 +1,13 @@ "use client"; -import { useEffect, useState } from "react"; -import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; -import { Button } from "@/components/ui/button"; import { getVersion } from "@/lib/version"; -import { - Plus, - Trash2, - Power, - PowerOff, - Calendar, - Mail, - FolderOpen, - Github, - Globe, - Database, - MessageSquare, - Search, - Terminal, - ChevronDown, - ChevronUp, - X, - Check, - Clock, - Eye, - EyeOff, - Key, - Loader2, -} from "lucide-react"; - -// --- Preset MCP Servers --- - -interface MCPPreset { - name: string; - description: string; - icon: React.ReactNode; - command: string; - args: string[]; - envKeys: { key: string; label: string; placeholder: string }[]; -} - -const MCP_PRESETS: MCPPreset[] = [ - { - name: "google-calendar", - description: "Read and manage Google Calendar events", - icon: , - command: "npx", - args: ["-y", "@cocal/google-calendar-mcp"], - envKeys: [ - { - key: "GOOGLE_OAUTH_CREDENTIALS", - label: "OAuth credentials JSON path", - placeholder: "~/.config/gcp-oauth.keys.json", - }, - ], - }, - { - name: "gmail", - description: "Read and send emails via Gmail", - icon: , - command: "npx", - args: ["-y", "@anthropic/gmail-mcp"], - envKeys: [], - }, - { - name: "google-drive", - description: "Access and search Google Drive files", - icon: , - command: "npx", - args: ["-y", "@anthropic/google-drive-mcp"], - envKeys: [], - }, - { - name: "github", - description: "Manage repos, issues, and pull requests", - icon: , - command: "npx", - args: ["-y", 
"@modelcontextprotocol/server-github"], - envKeys: [ - { - key: "GITHUB_PERSONAL_ACCESS_TOKEN", - label: "GitHub Token", - placeholder: "ghp_...", - }, - ], - }, - { - name: "slack", - description: "Read and send Slack messages", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-slack"], - envKeys: [ - { - key: "SLACK_BOT_TOKEN", - label: "Bot Token", - placeholder: "xoxb-...", - }, - ], - }, - { - name: "brave-search", - description: "Web search via Brave Search API", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-brave-search"], - envKeys: [ - { - key: "BRAVE_API_KEY", - label: "Brave API Key", - placeholder: "BSA...", - }, - ], - }, - { - name: "filesystem", - description: "Read and write local files", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users"], - envKeys: [], - }, - { - name: "postgres", - description: "Query PostgreSQL databases", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-postgres"], - envKeys: [ - { - key: "POSTGRES_CONNECTION_STRING", - label: "Connection String", - placeholder: "postgresql://user:pass@host/db", - }, - ], - }, - { - name: "fetch", - description: "Fetch and read web page content", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-fetch"], - envKeys: [], - }, - { - name: "puppeteer", - description: "Browser automation and screenshots", - icon: , - command: "npx", - args: ["-y", "@modelcontextprotocol/server-puppeteer"], - envKeys: [], - }, -]; - -// --- Types --- - -interface MCPServer { - id: string; - name: string; - command: string; - args: string[]; - env: Record; - enabled: boolean; -} +import { MCPServersSection } from "@/components/settings/mcp-servers-section"; +import { SessionRetentionSection } from "@/components/settings/session-retention-section"; +import { EnvKeysSection } from "@/components/settings/env-keys-section"; const version = getVersion(); -const inputClasses = - 
"w-full rounded-lg border border-border bg-background px-3 py-2 text-sm text-foreground outline-none transition-colors placeholder:text-muted focus:border-border-hover input-focus"; - -function presetButtonClass(installed: boolean, isConfiguring: boolean): string { - if (installed) return "cursor-default border-border/50 opacity-40"; - if (isConfiguring) return "border-accent/30 bg-accent-glow"; - return "border-border hover:border-border-hover hover:bg-surface-hover"; -} - -function presetIconClass(installed: boolean, isConfiguring: boolean): string { - if (installed) return "text-muted"; - if (isConfiguring) return "text-accent"; - return "text-muted-foreground"; -} - export default function SettingsPage() { - const [servers, setServers] = useState([]); - const [showCustom, setShowCustom] = useState(false); - const [addingPreset, setAddingPreset] = useState(null); - const [presetEnvValues, setPresetEnvValues] = useState>({}); - const [newServer, setNewServer] = useState({ - name: "", - command: "", - args: "", - env: "", - }); - - // Session retention state - const [sessionRetentionDays, setSessionRetentionDays] = useState(""); - const [sessionRetentionSaving, setSessionRetentionSaving] = useState(false); - const [sessionRetentionSaved, setSessionRetentionSaved] = useState(false); - - // API Keys (env file) state - const [envKeys, setEnvKeys] = useState>({}); - const [envExists, setEnvExists] = useState(false); - const [envEditing, setEnvEditing] = useState>({}); - const [envSaving, setEnvSaving] = useState(false); - const [envSaved, setEnvSaved] = useState(false); - const [envRevealed, setEnvRevealed] = useState>({}); - const [showAddKey, setShowAddKey] = useState(false); - const [newKeyName, setNewKeyName] = useState(""); - const [newKeyValue, setNewKeyValue] = useState(""); - const [newKeyError, setNewKeyError] = useState(null); - - useEffect(() => { - fetchServers(); - fetchAppSettings(); - fetchEnvKeys(); - }, []); - - const fetchServers = async () => { - 
try { - const res = await fetch("/api/mcp/servers"); - if (res.ok) { - setServers(await res.json()); - } - } catch (error) { - console.error("Error fetching servers:", error); - } - }; - - const fetchEnvKeys = async () => { - try { - const res = await fetch("/api/settings/env"); - if (res.ok) { - const data = await res.json(); - setEnvExists(data.exists); - setEnvKeys(data.keys || {}); - } - } catch (error) { - console.error("Error fetching env keys:", error); - } - }; - - const revealEnvKey = async (key: string) => { - if (envRevealed[key]) { - // Toggle off — just hide it - setEnvRevealed({ ...envRevealed, [key]: false }); - return; - } - try { - const res = await fetch(`/api/settings/env?unmask=${key}`); - if (res.ok) { - const data = await res.json(); - const keyData = data.keys?.[key]; - if (keyData?.value) { - setEnvKeys((prev) => ({ - ...prev, - [key]: { ...prev[key], value: keyData.value }, - })); - setEnvRevealed({ ...envRevealed, [key]: true }); - } - } - } catch (error) { - console.error("Error revealing key:", error); - } - }; - - const saveEnvKeys = async () => { - // Only send keys where the user actually typed a value - const dirty = Object.fromEntries( - Object.entries(envEditing).filter(([, v]) => v.length > 0) - ); - if (Object.keys(dirty).length === 0) return; - setEnvSaving(true); - setEnvSaved(false); - try { - const res = await fetch("/api/settings/env", { - method: "PATCH", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify(dirty), - }); - if (res.ok) { - setEnvSaved(true); - setEnvEditing({}); - setEnvRevealed({}); - fetchEnvKeys(); - setTimeout(() => setEnvSaved(false), 3000); - } - } catch (error) { - console.error("Error saving env keys:", error); - } - setEnvSaving(false); - }; - - const addCustomKey = async () => { - const key = newKeyName.trim().toUpperCase(); - if (!key) return; - if (!/^[A-Z][A-Z0-9_]*$/.test(key)) { - setNewKeyError("Use UPPER_SNAKE_CASE (e.g. 
MY_API_KEY)"); - return; - } - if (!newKeyValue.trim()) { - setNewKeyError("Value is required"); - return; - } - setNewKeyError(null); - setEnvSaving(true); - try { - const res = await fetch("/api/settings/env", { - method: "PATCH", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ [key]: newKeyValue }), - }); - if (res.ok) { - setNewKeyName(""); - setNewKeyValue(""); - setShowAddKey(false); - setEnvSaved(true); - fetchEnvKeys(); - setTimeout(() => setEnvSaved(false), 3000); - } else { - const data = await res.json(); - setNewKeyError(data.error || "Failed to add key"); - } - } catch { - setNewKeyError("Failed to add key"); - } - setEnvSaving(false); - }; - - const fetchAppSettings = async () => { - try { - const res = await fetch("/api/settings"); - if (res.ok) { - const data = await res.json(); - if (data.session_retention_days) { - setSessionRetentionDays(data.session_retention_days); - } - } - } catch (error) { - console.error("Error fetching app settings:", error); - } - }; - - const saveSessionRetention = async () => { - setSessionRetentionSaving(true); - setSessionRetentionSaved(false); - try { - const res = await fetch("/api/settings", { - method: "PATCH", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - session_retention_days: sessionRetentionDays || "", - }), - }); - if (res.ok) { - setSessionRetentionSaved(true); - setTimeout(() => setSessionRetentionSaved(false), 2000); - } - } catch (error) { - console.error("Error saving session retention:", error); - } - setSessionRetentionSaving(false); - }; - - const addFromPreset = async (preset: MCPPreset) => { - if (preset.envKeys.length > 0 && addingPreset?.name !== preset.name) { - setAddingPreset(preset); - setPresetEnvValues({}); - return; - } - - const env: Record = {}; - for (const ek of preset.envKeys) { - if (presetEnvValues[ek.key]) { - env[ek.key] = presetEnvValues[ek.key]; - } - } - - try { - const res = await fetch("/api/mcp/servers", { - method: 
"POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - name: preset.name, - command: preset.command, - args: preset.args, - env, - }), - }); - - if (res.ok) { - setAddingPreset(null); - setPresetEnvValues({}); - fetchServers(); - } - } catch (error) { - console.error("Error adding server:", error); - } - }; - - const addCustomServer = async () => { - if (!newServer.name || !newServer.command) return; - - try { - const res = await fetch("/api/mcp/servers", { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - name: newServer.name, - command: newServer.command, - args: newServer.args - ? newServer.args.split(" ").filter(Boolean) - : [], - env: newServer.env ? JSON.parse(newServer.env) : {}, - }), - }); - - if (res.ok) { - setNewServer({ name: "", command: "", args: "", env: "" }); - setShowCustom(false); - fetchServers(); - } - } catch (error) { - console.error("Error adding server:", error); - } - }; - - const toggleServer = async (id: string, enabled: boolean) => { - try { - await fetch(`/api/mcp/servers/${id}`, { - method: "PATCH", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ enabled: !enabled }), - }); - fetchServers(); - } catch (error) { - console.error("Error toggling server:", error); - } - }; - - const deleteServer = async (id: string) => { - try { - await fetch(`/api/mcp/servers/${id}`, { method: "DELETE" }); - fetchServers(); - } catch (error) { - console.error("Error deleting server:", error); - } - }; - - const installedNames = new Set(servers.map((s) => s.name)); - return (
@@ -459,524 +18,10 @@ export default function SettingsPage() {

- {/* Active Servers */} - {servers.length > 0 && ( - - - Active Integrations - - - {servers.map((server) => ( -
-
-
- - - {server.name} - -
-

- {server.command} {(server.args || []).join(" ")} -

-
-
- - -
-
- ))} -
-
- )} - - {/* Preset Library */} - - - Integration Library - - -
- {MCP_PRESETS.map((preset, idx) => { - const installed = installedNames.has(preset.name); - const isConfiguring = addingPreset?.name === preset.name; - - return ( -
- - - {/* Inline env config form */} - {isConfiguring && ( -
- {preset.envKeys.map((ek) => ( -
- - - setPresetEnvValues({ - ...presetEnvValues, - [ek.key]: e.target.value, - }) - } - className={inputClasses} - /> -
- ))} -
- - -
-
- )} -
- ); - })} -
- - {/* Custom server toggle */} -
- - - {showCustom && ( -
- - setNewServer({ ...newServer, name: e.target.value }) - } - className={inputClasses} - /> - - setNewServer({ ...newServer, command: e.target.value }) - } - className={inputClasses} - /> - - setNewServer({ ...newServer, args: e.target.value }) - } - className={inputClasses} - /> - - setNewServer({ ...newServer, env: e.target.value }) - } - className={inputClasses} - /> -
- - -
-
- )} -
-
-
- - {/* Session Retention */} - - - - - Session Retention - - - -

- Sessions older than the specified days will be automatically cleaned up. - Leave empty to preserve all sessions. -

-
- -
- { - const v = e.target.value; - if (v === "" || (parseInt(v, 10) >= 1 && !v.includes("."))) { - setSessionRetentionDays(v); - } - }} - className={`${inputClasses} max-w-[140px]`} - /> -
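The `onChange` guard above only lets through an empty string or a positive whole number. Pulled out as a standalone helper (hypothetical name; the page inlines the check), the same logic is:

```typescript
// Guard for the session-retention input: accept "" (no retention)
// or a positive integer with no decimal point. Mirrors the inline
// onChange check; note that, like the original, parseInt tolerates
// trailing junk ("30x" parses as 30).
function isValidRetentionInput(v: string): boolean {
  return v === "" || (Number.parseInt(v, 10) >= 1 && !v.includes("."));
}
```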
- - {sessionRetentionDays && ( - - )} -
-
- {sessionRetentionSaved && ( -

- Session retention setting saved -

- )} - {sessionRetentionDays && ( -

- Sessions older than {sessionRetentionDays} day{sessionRetentionDays !== "1" ? "s" : ""} will be cleaned up -

- )} -
-
-
- - {/* API Keys - live env editor */} - - - - - API Keys - - - - {!envExists ? ( -
- Env file not found at /etc/dobby/env. - Run make install to set up Dobby as a service. -
- ) : ( - <> -

- Manage your API keys from the environment file at{" "} - - /etc/dobby/env - -

-
- {(() => { - const knownLabels: Record = { - GEMINI_API_KEY: "Gemini (default)", - OPENAI_API_KEY: "OpenAI", - ANTHROPIC_API_KEY: "Anthropic", - }; - // Show known keys first (even if not set), then any extra keys from env - const knownKeys = ["GEMINI_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]; - const extraKeys = Object.keys(envKeys).filter(k => !knownKeys.includes(k)).sort(); - const allKeys = [...knownKeys, ...extraKeys]; - - return allKeys.map((key) => { - const info = envKeys[key]; - const label = knownLabels[key] || key; - const isEditing = key in envEditing; - const isRevealed = envRevealed[key] || false; - - const displayValue = isEditing - ? envEditing[key] - : isRevealed && info?.value - ? info.value - : ""; - - return ( -
-
- {label} - - {info?.set ? "configured" : "not set"} - -
- {!isEditing && !isRevealed && info?.set && ( -
- {info.masked} -
- )} -
-
- - setEnvEditing({ ...envEditing, [key]: e.target.value }) - } - className={inputClasses} - /> -
- {info?.set && ( - - )} - {info?.set && !isEditing && ( - - )} - {isEditing && ( - - )} -
-
- ); - }); - })()} -
- - {/* Save button for inline edits */} - {Object.values(envEditing).some(v => v.length > 0) && ( -
- - -
- )} - - {/* Add custom key */} -
- {showAddKey ? ( -
-
- - { setNewKeyName(e.target.value.toUpperCase()); setNewKeyError(null); }} - className={inputClasses} - autoFocus - /> -
-
- - { setNewKeyValue(e.target.value); setNewKeyError(null); }} - className={inputClasses} - /> -
- {newKeyError && ( -

{newKeyError}

- )} -
- - -
-
- ) : ( - - )} -
- - {envSaved && ( -
- Setting saved. Restart Dobby to apply changes. -
- )} + + + -

- Dobby's default provider is Gemini. Changes require a restart to take effect. -

- - )} -
-
{/* Version */}
{version.tag && Version {version.tag}}
diff --git a/src/app/api/agents/[agentId]/route.ts b/src/app/api/agents/[agentId]/route.ts
index 9bbac55..c05a012 100644
--- a/src/app/api/agents/[agentId]/route.ts
+++ b/src/app/api/agents/[agentId]/route.ts
@@ -5,7 +5,7 @@ import { eq, and } from "drizzle-orm";
 import { updateAgentSchema } from "@/lib/validations/agent";
 import { syncCrontab } from "@/lib/cron/sync";
 import { maskToken } from "@/lib/notifications/telegram";
-import { withErrorHandler } from "@/lib/api/utils";
+import { withErrorHandler, parseBody } from "@/lib/api/utils";

 export const runtime = "nodejs";

@@ -50,14 +50,8 @@ export const PATCH = withErrorHandler(async (request, { params }) => {
   const { agentId } = await params;
   const body = await request.json();

-  const parsed = updateAgentSchema.safeParse(body);
-
-  if (!parsed.success) {
-    return NextResponse.json(
-      { error: parsed.error.issues[0]?.message || "Invalid input" },
-      { status: 400 }
-    );
-  }
+  const { data: parsed, error } = parseBody(body, updateAgentSchema);
+  if (error) return error;

   // Get current agent to check for name change
   const current = await db
@@ -71,12 +65,12 @@ export const PATCH = withErrorHandler(async (request, { params }) => {
   }

   // Check name uniqueness within project if renaming
-  if (parsed.data.name && parsed.data.name !== current[0].name) {
+  if (parsed.name && parsed.name !== current[0].name) {
     const nameConflict = await db
       .select({ id: agents.id })
       .from(agents)
       .where(
-        and(eq(agents.projectId, current[0].projectId), eq(agents.name, parsed.data.name))
+        and(eq(agents.projectId, current[0].projectId), eq(agents.name, parsed.name))
       )
       .limit(1);
     if (nameConflict.length > 0 && nameConflict[0].id !== agentId) {
@@ -89,18 +83,18 @@ export const PATCH = withErrorHandler(async (request, { params }) => {
   // Map validated fields to DB columns
   const updates: Record = { updatedAt: new Date() };
-  if (parsed.data.name !== undefined) updates.name = parsed.data.name;
-  if (parsed.data.soul !== undefined) updates.soul = parsed.data.soul;
-  if (parsed.data.skill !== undefined) updates.skill = parsed.data.skill;
-  if (parsed.data.schedule !== undefined) updates.schedule = parsed.data.schedule;
-  if (parsed.data.timezone !== undefined) updates.timezone = parsed.data.timezone;
-  if (parsed.data.envVars !== undefined) {
+  if (parsed.name !== undefined) updates.name = parsed.name;
+  if (parsed.soul !== undefined) updates.soul = parsed.soul;
+  if (parsed.skill !== undefined) updates.skill = parsed.skill;
+  if (parsed.schedule !== undefined) updates.schedule = parsed.schedule;
+  if (parsed.timezone !== undefined) updates.timezone = parsed.timezone;
+  if (parsed.envVars !== undefined) {
     // Preserve original env var values when the submitted value is masked.
     // The GET endpoint masks values (e.g. "post****uire"), so edits that
     // don't touch env vars would otherwise overwrite real credentials.
     const originalEnvVars = (current[0].envVars as Record) || {};
     const merged: Record = {};
-    for (const [k, v] of Object.entries(parsed.data.envVars)) {
+    for (const [k, v] of Object.entries(parsed.envVars)) {
       if (k in originalEnvVars) {
         // Check if the submitted value exactly matches the masked form
         // the GET endpoint would produce for the original value.
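The masked-value merge described in the comments above can be sketched on its own. An assumption up front: values are masked as the first four characters, `****`, then the last four, matching the `post****uire` example in the comment; the actual `maskToken` implementation may differ.

```typescript
// Hypothetical mask, inferred from the "post****uire" example.
function maskValue(v: string): string {
  return v.length > 8 ? `${v.slice(0, 4)}****${v.slice(-4)}` : "****";
}

// If a submitted value is exactly the masked form of the stored value,
// the user didn't edit it: keep the real credential. Otherwise the
// submitted value (new or edited) wins.
function mergeEnvVars(
  submitted: Record<string, string>,
  original: Record<string, string>
): Record<string, string> {
  const merged: Record<string, string> = {};
  for (const [k, v] of Object.entries(submitted)) {
    merged[k] = k in original && v === maskValue(original[k]) ? original[k] : v;
  }
  return merged;
}
```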
@@ -114,7 +108,7 @@ export const PATCH = withErrorHandler(async (request, { params }) => { } updates.envVars = merged; } - if (parsed.data.enabled !== undefined) updates.enabled = parsed.data.enabled; + if (parsed.enabled !== undefined) updates.enabled = parsed.enabled; const [updated] = await db .update(agents) @@ -123,18 +117,18 @@ export const PATCH = withErrorHandler(async (request, { params }) => { .returning(); // If name changed, update agentRuns to maintain linkage - if (parsed.data.name && parsed.data.name !== current[0].name) { + if (parsed.name && parsed.name !== current[0].name) { await db .update(agentRuns) - .set({ agentName: parsed.data.name }) + .set({ agentName: parsed.name }) .where(eq(agentRuns.agentId, agentId)); } // Re-sync crontab if schedule, name, or enabled state changed if ( - parsed.data.schedule !== undefined || - parsed.data.name !== undefined || - parsed.data.enabled !== undefined + parsed.schedule !== undefined || + parsed.name !== undefined || + parsed.enabled !== undefined ) { syncCrontab(); } diff --git a/src/app/api/agents/[agentId]/telegram/route.ts b/src/app/api/agents/[agentId]/telegram/route.ts index 3953611..5580527 100644 --- a/src/app/api/agents/[agentId]/telegram/route.ts +++ b/src/app/api/agents/[agentId]/telegram/route.ts @@ -1,8 +1,6 @@ import { NextResponse } from "next/server"; -import { db } from "@/lib/db"; -import { notificationConfigs } from "@/lib/db/schema"; -import { eq } from "drizzle-orm"; import { maskToken } from "@/lib/notifications/telegram"; +import { upsertNotificationConfig, getNotificationConfig, deleteNotificationConfig } from "@/lib/db/notification-config"; import { withErrorHandler } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -14,13 +12,7 @@ function channelKey(agentId: string) { export const GET = withErrorHandler(async (_request, { params }) => { const { agentId } = await params; - const rows = await db - .select() - .from(notificationConfigs) - 
.where(eq(notificationConfigs.channel, channelKey(agentId))) - .limit(1); - - const config = rows[0]; + const config = getNotificationConfig(channelKey(agentId)); if (!config) { return NextResponse.json({ configured: false, @@ -54,50 +46,17 @@ export const POST = withErrorHandler(async (request, { params }) => { ); } - const channel = channelKey(agentId); - const configData = { - bot_token: botToken, - chat_id: chatId, - bot_name: botName || "", - }; - - const rows = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, channel)) - .limit(1); - - const existing = rows[0]; + const id = upsertNotificationConfig( + channelKey(agentId), + { bot_token: botToken, chat_id: chatId, bot_name: botName || "" }, + enabled ?? true + ); - if (existing) { - const [updated] = await db - .update(notificationConfigs) - .set({ - enabled: enabled ?? true, - config: configData, - updatedAt: new Date(), - }) - .where(eq(notificationConfigs.id, existing.id)) - .returning(); - return NextResponse.json({ success: true, id: updated.id }); - } else { - const [created] = await db - .insert(notificationConfigs) - .values({ - channel, - enabled: enabled ?? 
true, - config: configData, - }) - .returning(); - return NextResponse.json({ success: true, id: created.id }); - } + return NextResponse.json({ success: true, id }); }); export const DELETE = withErrorHandler(async (_request, { params }) => { const { agentId } = await params; - - await db - .delete(notificationConfigs) - .where(eq(notificationConfigs.channel, channelKey(agentId))); + deleteNotificationConfig(channelKey(agentId)); return NextResponse.json({ success: true }); }); diff --git a/src/app/api/issues/[id]/cleanup/route.ts b/src/app/api/issues/[id]/cleanup/route.ts index fb8b95f..fc5d27f 100644 --- a/src/app/api/issues/[id]/cleanup/route.ts +++ b/src/app/api/issues/[id]/cleanup/route.ts @@ -1,9 +1,9 @@ import { NextResponse } from "next/server"; -import { execFileSync } from "node:child_process"; import { db } from "@/lib/db"; import { issues, repositories } from "@/lib/db/schema"; import { eq } from "drizzle-orm"; import { withErrorHandler } from "@/lib/api/utils"; +import { removeWorktree } from "@/lib/issues/git-worktree"; export const runtime = "nodejs"; @@ -33,16 +33,7 @@ export const POST = withErrorHandler(async ( .where(eq(repositories.id, issue.repositoryId)) .limit(1); - if (repo) { - try { - execFileSync("git", ["worktree", "remove", issue.worktreePath, "--force"], { - cwd: repo.localRepoPath, stdio: "ignore", - }); - execFileSync("git", ["worktree", "prune"], { cwd: repo.localRepoPath, stdio: "ignore" }); - } catch { - // Worktree may already be gone - } - } + if (repo) removeWorktree(issue.worktreePath, repo.localRepoPath); await db.update(issues).set({ worktreePath: null, diff --git a/src/app/api/issues/[id]/route.ts b/src/app/api/issues/[id]/route.ts index 4f3f94b..271dbd1 100644 --- a/src/app/api/issues/[id]/route.ts +++ b/src/app/api/issues/[id]/route.ts @@ -1,11 +1,11 @@ import { NextResponse } from "next/server"; -import { execFileSync } from "node:child_process"; import { db } from "@/lib/db"; import { issues, issueMessages, 
repositories } from "@/lib/db/schema"; import { eq, and, inArray } from "drizzle-orm"; import { withErrorHandler } from "@/lib/api/utils"; import { getIssueAttachments, deleteIssueAttachmentFiles } from "@/lib/issues/attachments"; import { refreshPrStatus } from "@/lib/issues/pr-status"; +import { removeWorktree } from "@/lib/issues/git-worktree"; export const runtime = "nodejs"; @@ -179,18 +179,9 @@ export const DELETE = withErrorHandler(async ( // Clean up worktree if it exists if (issue.worktreePath) { - try { - const [repo] = await db.select().from(repositories) - .where(eq(repositories.id, issue.repositoryId)).limit(1); - if (repo) { - execFileSync("git", ["worktree", "remove", issue.worktreePath, "--force"], { - cwd: repo.localRepoPath, stdio: "ignore", - }); - execFileSync("git", ["worktree", "prune"], { cwd: repo.localRepoPath, stdio: "ignore" }); - } - } catch { - // Best effort cleanup - } + const [repo] = await db.select().from(repositories) + .where(eq(repositories.id, issue.repositoryId)).limit(1); + if (repo) removeWorktree(issue.worktreePath, repo.localRepoPath); } // Clean up attachment files from disk (DB records cascade-deleted) diff --git a/src/app/api/issues/cleanup/route.ts b/src/app/api/issues/cleanup/route.ts index b3bcd23..e79e167 100644 --- a/src/app/api/issues/cleanup/route.ts +++ b/src/app/api/issues/cleanup/route.ts @@ -1,9 +1,9 @@ import { NextResponse } from "next/server"; -import { execFileSync } from "node:child_process"; import { db } from "@/lib/db"; import { issues, repositories } from "@/lib/db/schema"; import { eq, and, isNotNull, isNull, inArray } from "drizzle-orm"; import { withErrorHandler } from "@/lib/api/utils"; +import { forceRemoveWorktree, pruneWorktrees } from "@/lib/issues/git-worktree"; export const runtime = "nodejs"; @@ -57,13 +57,7 @@ export const POST = withErrorHandler(async (request: Request) => { const repoPath = repoIssues[0].localRepoPath; for (const issue of repoIssues) { - try { - execFileSync("git", 
["worktree", "remove", issue.worktreePath!, "--force"], { - cwd: repoPath, stdio: "ignore", - }); - } catch { - // Worktree may already be gone from disk — still clear DB - } + forceRemoveWorktree(issue.worktreePath!, repoPath); try { await db.update(issues).set({ @@ -76,12 +70,7 @@ export const POST = withErrorHandler(async (request: Request) => { } } - // Prune once per repo - try { - execFileSync("git", ["worktree", "prune"], { cwd: repoPath, stdio: "ignore" }); - } catch { - // Best-effort prune - } + pruneWorktrees(repoPath); } return NextResponse.json({ cleaned, errors }); diff --git a/src/app/api/issues/projects/[id]/route.ts b/src/app/api/issues/projects/[id]/route.ts index be35cf8..b12a2c2 100644 --- a/src/app/api/issues/projects/[id]/route.ts +++ b/src/app/api/issues/projects/[id]/route.ts @@ -5,7 +5,8 @@ import { db } from "@/lib/db"; import { repositories, issues } from "@/lib/db/schema"; import { eq, count, and, isNotNull, isNull } from "drizzle-orm"; import { updateRepositorySchema } from "@/lib/validations/repository"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; +import { forceRemoveWorktree, pruneWorktrees } from "@/lib/issues/git-worktree"; export const runtime = "nodejs"; @@ -56,20 +57,14 @@ export const PATCH = withErrorHandler(async ( const { id } = await params; const body = await request.json(); - const parsed = updateRepositorySchema.safeParse(body); + const { data: parsed, error } = parseBody(body, updateRepositorySchema); + if (error) return error; - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } - - if (parsed.data.name) { + if (parsed.name) { const existing = await db .select({ id: repositories.id }) .from(repositories) - .where(eq(repositories.name, parsed.data.name)) + .where(eq(repositories.name, parsed.name)) .limit(1); if (existing.length > 0 && existing[0].id !== id) 
{ @@ -81,22 +76,22 @@ export const PATCH = withErrorHandler(async ( } // Validate localRepoPath if being changed - if (parsed.data.localRepoPath) { - if (!existsSync(parsed.data.localRepoPath)) { + if (parsed.localRepoPath) { + if (!existsSync(parsed.localRepoPath)) { return NextResponse.json({ error: "Local repo path does not exist" }, { status: 400 }); } try { - execFileSync("git", ["rev-parse", "--git-dir"], { cwd: parsed.data.localRepoPath, stdio: "ignore" }); + execFileSync("git", ["rev-parse", "--git-dir"], { cwd: parsed.localRepoPath, stdio: "ignore" }); } catch { return NextResponse.json({ error: "Path is not a git repository" }, { status: 400 }); } } const updateData: Record = { updatedAt: new Date() }; - if (parsed.data.name !== undefined) updateData.name = parsed.data.name; - if (parsed.data.githubRepoUrl !== undefined) updateData.githubRepoUrl = parsed.data.githubRepoUrl || null; - if (parsed.data.localRepoPath !== undefined) updateData.localRepoPath = parsed.data.localRepoPath; - if (parsed.data.defaultBranch !== undefined) updateData.defaultBranch = parsed.data.defaultBranch; + if (parsed.name !== undefined) updateData.name = parsed.name; + if (parsed.githubRepoUrl !== undefined) updateData.githubRepoUrl = parsed.githubRepoUrl || null; + if (parsed.localRepoPath !== undefined) updateData.localRepoPath = parsed.localRepoPath; + if (parsed.defaultBranch !== undefined) updateData.defaultBranch = parsed.defaultBranch; const [updated] = await db .update(repositories) @@ -140,17 +135,9 @@ export const DELETE = withErrorHandler(async ( if (repo) { for (const issue of repoIssues) { - if (issue.worktreePath) { - try { - execFileSync("git", ["worktree", "remove", issue.worktreePath, "--force"], { - cwd: repo.localRepoPath, stdio: "ignore", - }); - } catch { /* best effort */ } - } + if (issue.worktreePath) forceRemoveWorktree(issue.worktreePath, repo.localRepoPath); } - try { - execFileSync("git", ["worktree", "prune"], { cwd: repo.localRepoPath, stdio: 
"ignore" }); - } catch { /* best effort */ } + pruneWorktrees(repo.localRepoPath); } const deleted = await db diff --git a/src/app/api/issues/projects/route.ts b/src/app/api/issues/projects/route.ts index 7f52f7c..9bb853d 100644 --- a/src/app/api/issues/projects/route.ts +++ b/src/app/api/issues/projects/route.ts @@ -5,7 +5,7 @@ import { db } from "@/lib/db"; import { repositories, issues } from "@/lib/db/schema"; import { eq, count, desc } from "drizzle-orm"; import { createRepositorySchema } from "@/lib/validations/repository"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -42,20 +42,14 @@ export const GET = withErrorHandler(async () => { export const POST = withErrorHandler(async (request: Request) => { const body = await request.json(); - const parsed = createRepositorySchema.safeParse(body); - - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } + const { data: parsed, error } = parseBody(body, createRepositorySchema); + if (error) return error; // Check uniqueness const existing = await db .select({ id: repositories.id }) .from(repositories) - .where(eq(repositories.name, parsed.data.name)) + .where(eq(repositories.name, parsed.name)) .limit(1); if (existing.length > 0) { @@ -66,7 +60,7 @@ export const POST = withErrorHandler(async (request: Request) => { } // Verify local path exists - if (!existsSync(parsed.data.localRepoPath)) { + if (!existsSync(parsed.localRepoPath)) { return NextResponse.json( { error: "Local repo path does not exist" }, { status: 400 } @@ -75,7 +69,7 @@ export const POST = withErrorHandler(async (request: Request) => { // Verify it's a git repo try { - execFileSync("git", ["rev-parse", "--git-dir"], { cwd: parsed.data.localRepoPath, stdio: "ignore" }); + execFileSync("git", ["rev-parse", "--git-dir"], { cwd: parsed.localRepoPath, stdio: 
"ignore" }); } catch { return NextResponse.json( { error: "Path is not a git repository" }, @@ -86,10 +80,10 @@ export const POST = withErrorHandler(async (request: Request) => { const [repo] = await db .insert(repositories) .values({ - name: parsed.data.name, - githubRepoUrl: parsed.data.githubRepoUrl || null, - localRepoPath: parsed.data.localRepoPath, - defaultBranch: parsed.data.defaultBranch, + name: parsed.name, + githubRepoUrl: parsed.githubRepoUrl || null, + localRepoPath: parsed.localRepoPath, + defaultBranch: parsed.defaultBranch, }) .returning(); diff --git a/src/app/api/issues/route.ts b/src/app/api/issues/route.ts index c36800a..f4c48ef 100644 --- a/src/app/api/issues/route.ts +++ b/src/app/api/issues/route.ts @@ -5,7 +5,7 @@ import { eq, desc, and, isNull, isNotNull, count, type SQL } from "drizzle-orm"; import { ensurePollerRunning } from "@/lib/issues/poller-manager"; import { ensureSlackIssuesSocketRunning } from "@/lib/issues/slack-socket"; import { createIssueSchema } from "@/lib/validations/issue"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -94,20 +94,14 @@ export const GET = withErrorHandler(async (request: Request) => { export const POST = withErrorHandler(async (request: Request) => { const body = await request.json(); - const parsed = createIssueSchema.safeParse(body); - - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } + const { data: parsed, error } = parseBody(body, createIssueSchema); + if (error) return error; // Verify repository exists const [repo] = await db .select() .from(repositories) - .where(eq(repositories.id, parsed.data.repositoryId)) + .where(eq(repositories.id, parsed.repositoryId)) .limit(1); if (!repo) { @@ -115,9 +109,9 @@ export const POST = withErrorHandler(async (request: Request) => { } const [issue] = await 
db.insert(issues).values({ - repositoryId: parsed.data.repositoryId, - title: parsed.data.title, - description: parsed.data.description, + repositoryId: parsed.repositoryId, + title: parsed.title, + description: parsed.description, }).returning(); return NextResponse.json(issue, { status: 201 }); diff --git a/src/app/api/issues/slack/route.ts b/src/app/api/issues/slack/route.ts index 304236d..6b18bb6 100644 --- a/src/app/api/issues/slack/route.ts +++ b/src/app/api/issues/slack/route.ts @@ -1,7 +1,4 @@ import { NextResponse } from "next/server"; -import { db } from "@/lib/db"; -import { notificationConfigs } from "@/lib/db/schema"; -import { eq } from "drizzle-orm"; import { isValidSlackAppToken, isValidSlackBotToken, @@ -9,6 +6,7 @@ import { testSlackConnection, } from "@/lib/notifications/slack"; import { ensureSlackIssuesSocketRunning, stopSlackIssuesSocket } from "@/lib/issues/slack-socket"; +import { upsertNotificationConfig, getNotificationConfig, deleteNotificationConfig } from "@/lib/db/notification-config"; import { withErrorHandler } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -16,11 +14,7 @@ export const runtime = "nodejs"; const CHANNEL = "slack-issues"; export const GET = withErrorHandler(async () => { - const [config] = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, CHANNEL)) - .limit(1); + const config = getNotificationConfig(CHANNEL); if (!config) { return NextResponse.json({ configured: false }); @@ -59,29 +53,13 @@ export const POST = withErrorHandler(async (request: Request) => { await testSlackConnection(botToken, appToken, channelId || undefined); } - const [existing] = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, CHANNEL)) - .limit(1); - - const config = { + const config: Record = { bot_token: botToken, app_token: appToken, ...(channelId ? 
{ channel_id: channelId } : {}), }; - if (existing) { - await db.update(notificationConfigs) - .set({ enabled: true, config, updatedAt: new Date() }) - .where(eq(notificationConfigs.id, existing.id)); - } else { - await db.insert(notificationConfigs).values({ - channel: CHANNEL, - enabled: true, - config, - }); - } + upsertNotificationConfig(CHANNEL, config); ensureSlackIssuesSocketRunning(); @@ -89,7 +67,7 @@ export const POST = withErrorHandler(async (request: Request) => { }); export const DELETE = withErrorHandler(async () => { - await db.delete(notificationConfigs).where(eq(notificationConfigs.channel, CHANNEL)); + deleteNotificationConfig(CHANNEL); stopSlackIssuesSocket(); return NextResponse.json({ success: true }); }); diff --git a/src/app/api/issues/telegram/route.ts b/src/app/api/issues/telegram/route.ts index 4bc6018..2326482 100644 --- a/src/app/api/issues/telegram/route.ts +++ b/src/app/api/issues/telegram/route.ts @@ -1,10 +1,8 @@ import { NextResponse } from "next/server"; -import { db } from "@/lib/db"; -import { notificationConfigs } from "@/lib/db/schema"; -import { eq } from "drizzle-orm"; import { maskToken, testTelegramNotification } from "@/lib/notifications/telegram"; import { ensurePollerRunning } from "@/lib/issues/poller-manager"; import { isValidBotToken } from "@/lib/telegram/api"; +import { upsertNotificationConfig, getNotificationConfig, deleteNotificationConfig } from "@/lib/db/notification-config"; import { withErrorHandler } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -12,11 +10,7 @@ export const runtime = "nodejs"; const CHANNEL = "telegram-issues"; export const GET = withErrorHandler(async () => { - const [config] = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, CHANNEL)) - .limit(1); + const config = getNotificationConfig(CHANNEL); if (!config) { return NextResponse.json({ configured: false }); @@ -60,29 +54,7 @@ export const POST = withErrorHandler(async (request: 
Request) => { } } - // Upsert config - const [existing] = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, CHANNEL)) - .limit(1); - - if (existing) { - await db - .update(notificationConfigs) - .set({ - enabled: true, - config: { bot_token: botToken, chat_id: chatId }, - updatedAt: new Date(), - }) - .where(eq(notificationConfigs.id, existing.id)); - } else { - await db.insert(notificationConfigs).values({ - channel: CHANNEL, - enabled: true, - config: { bot_token: botToken, chat_id: chatId }, - }); - } + upsertNotificationConfig(CHANNEL, { bot_token: botToken, chat_id: chatId }); // Start poller now that config is available ensurePollerRunning(); @@ -91,9 +63,6 @@ export const POST = withErrorHandler(async (request: Request) => { }); export const DELETE = withErrorHandler(async () => { - await db - .delete(notificationConfigs) - .where(eq(notificationConfigs.channel, CHANNEL)); - + deleteNotificationConfig(CHANNEL); return NextResponse.json({ success: true }); }); diff --git a/src/app/api/mcp/servers/[id]/route.ts b/src/app/api/mcp/servers/[id]/route.ts index 0d2ebb7..792e0cb 100644 --- a/src/app/api/mcp/servers/[id]/route.ts +++ b/src/app/api/mcp/servers/[id]/route.ts @@ -1,7 +1,8 @@ import { NextResponse } from "next/server"; import { db, mcpServers } from "@/lib/db"; import { eq } from "drizzle-orm"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; +import { mcpServerUpdateSchema } from "@/lib/validations/mcp"; export const runtime = "nodejs"; @@ -10,11 +11,13 @@ export const PATCH = withErrorHandler(async ( { params }: { params: Promise> } ) => { const { id } = await params; - const body = await request.json(); + const raw = await request.json(); + const { data: parsed, error } = parseBody(raw, mcpServerUpdateSchema); + if (error) return error; const [updated] = await db .update(mcpServers) - .set(body) + .set(parsed) .where(eq(mcpServers.id, id)) 
.returning(); diff --git a/src/app/api/notifications/telegram/bots/copy/route.ts b/src/app/api/notifications/telegram/bots/copy/route.ts index e54d840..212e8c8 100644 --- a/src/app/api/notifications/telegram/bots/copy/route.ts +++ b/src/app/api/notifications/telegram/bots/copy/route.ts @@ -3,6 +3,7 @@ import { db } from "@/lib/db"; import { notificationConfigs, agents } from "@/lib/db/schema"; import { eq } from "drizzle-orm"; import { maskToken } from "@/lib/notifications/telegram"; +import { upsertNotificationConfig } from "@/lib/db/notification-config"; import { withErrorHandler } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -55,36 +56,10 @@ export const POST = withErrorHandler(async (request: Request) => { ); } - const channel = `telegram-agent:${targetAgentId}`; - const configData = { - bot_token: srcCfg.bot_token, - chat_id: srcCfg.chat_id, - bot_name: srcCfg.bot_name || "", - }; - - // Upsert — check existing first (same pattern as existing telegram/route.ts) - const [existing] = await db - .select() - .from(notificationConfigs) - .where(eq(notificationConfigs.channel, channel)) - .limit(1); - - if (existing) { - await db - .update(notificationConfigs) - .set({ - enabled: true, - config: configData, - updatedAt: new Date(), - }) - .where(eq(notificationConfigs.id, existing.id)); - } else { - await db.insert(notificationConfigs).values({ - channel, - enabled: true, - config: configData, - }); - } + upsertNotificationConfig( + `telegram-agent:${targetAgentId}`, + { bot_token: srcCfg.bot_token, chat_id: srcCfg.chat_id, bot_name: srcCfg.bot_name || "" } + ); // Return masked config (same shape as GET /api/agents/{agentId}/telegram) return NextResponse.json({ diff --git a/src/app/api/projects/[id]/agents/route.ts b/src/app/api/projects/[id]/agents/route.ts index 28d935a..41f7c1e 100644 --- a/src/app/api/projects/[id]/agents/route.ts +++ b/src/app/api/projects/[id]/agents/route.ts @@ -5,7 +5,7 @@ import { eq, and, sql } from "drizzle-orm"; import { 
createAgentSchema } from "@/lib/validations/agent"; import { cronToHuman } from "@/lib/utils/cron"; import { syncCrontab } from "@/lib/cron/sync"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -84,21 +84,15 @@ export const POST = withErrorHandler(async (request, { params }) => { } const body = await request.json(); - const parsed = createAgentSchema.safeParse(body); - - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } + const { data: parsed, error } = parseBody(body, createAgentSchema); + if (error) return error; // Check uniqueness within project const existing = await db .select({ id: agents.id }) .from(agents) .where( - and(eq(agents.projectId, id), eq(agents.name, parsed.data.name)) + and(eq(agents.projectId, id), eq(agents.name, parsed.name)) ) .limit(1); @@ -113,13 +107,13 @@ export const POST = withErrorHandler(async (request, { params }) => { .insert(agents) .values({ projectId: id, - name: parsed.data.name, - soul: parsed.data.soul, - skill: parsed.data.skill, - schedule: parsed.data.schedule, - timezone: parsed.data.timezone || null, - envVars: parsed.data.envVars || {}, - enabled: parsed.data.enabled ?? true, + name: parsed.name, + soul: parsed.soul, + skill: parsed.skill, + schedule: parsed.schedule, + timezone: parsed.timezone || null, + envVars: parsed.envVars || {}, + enabled: parsed.enabled ?? 
true, }) .returning(); diff --git a/src/app/api/projects/[id]/route.ts b/src/app/api/projects/[id]/route.ts index 8d063a3..c49b234 100644 --- a/src/app/api/projects/[id]/route.ts +++ b/src/app/api/projects/[id]/route.ts @@ -3,7 +3,7 @@ import { db } from "@/lib/db"; import { projects, agents } from "@/lib/db/schema"; import { eq } from "drizzle-orm"; import { updateProjectSchema } from "@/lib/validations/project"; -import { withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -47,21 +47,15 @@ export const PATCH = withErrorHandler(async (request, { params }) => { const { id } = await params; const body = await request.json(); - const parsed = updateProjectSchema.safeParse(body); - - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } + const { data: parsed, error } = parseBody(body, updateProjectSchema); + if (error) return error; // Check name uniqueness if changing name - if (parsed.data.name) { + if (parsed.name) { const existing = await db .select({ id: projects.id }) .from(projects) - .where(eq(projects.name, parsed.data.name)) + .where(eq(projects.name, parsed.name)) .limit(1); if (existing.length > 0 && existing[0].id !== id) { @@ -74,7 +68,7 @@ export const PATCH = withErrorHandler(async (request, { params }) => { const [updated] = await db .update(projects) - .set({ ...parsed.data, updatedAt: new Date() }) + .set({ ...parsed, updatedAt: new Date() }) .where(eq(projects.id, id)) .returning(); diff --git a/src/app/api/projects/route.ts b/src/app/api/projects/route.ts index a0b1aba..de03c03 100644 --- a/src/app/api/projects/route.ts +++ b/src/app/api/projects/route.ts @@ -3,7 +3,7 @@ import { db } from "@/lib/db"; import { projects, agents } from "@/lib/db/schema"; import { eq, count, desc } from "drizzle-orm"; import { createProjectSchema } from "@/lib/validations/project"; -import 
{ withErrorHandler } from "@/lib/api/utils"; +import { withErrorHandler, parseBody } from "@/lib/api/utils"; export const runtime = "nodejs"; @@ -36,20 +36,14 @@ export const GET = withErrorHandler(async () => { export const POST = withErrorHandler(async (request) => { const body = await request.json(); - const parsed = createProjectSchema.safeParse(body); - - if (!parsed.success) { - return NextResponse.json( - { error: parsed.error.issues[0]?.message || "Invalid input" }, - { status: 400 } - ); - } + const { data: parsed, error } = parseBody(body, createProjectSchema); + if (error) return error; // Check uniqueness const existing = await db .select({ id: projects.id }) .from(projects) - .where(eq(projects.name, parsed.data.name)) + .where(eq(projects.name, parsed.name)) .limit(1); if (existing.length > 0) { @@ -62,8 +56,8 @@ export const POST = withErrorHandler(async (request) => { const [project] = await db .insert(projects) .values({ - name: parsed.data.name, - description: parsed.data.description || null, + name: parsed.name, + description: parsed.description || null, }) .returning(); diff --git a/src/components/agents/agent-form.tsx b/src/components/agents/agent-form.tsx index 42a45eb..5e72b99 100644 --- a/src/components/agents/agent-form.tsx +++ b/src/components/agents/agent-form.tsx @@ -6,6 +6,7 @@ import { cronToHuman } from "@/lib/utils/cron"; import { ClaudePanel } from "./claude-panel"; import { EnvVarsEditor } from "./env-vars-editor"; import { MarkdownEditor } from "@/components/ui/markdown-editor"; +import { inputClasses, labelClasses } from "@/components/shared/form-classes"; export interface AgentFormData { name: string; @@ -23,11 +24,6 @@ interface AgentFormProps { submitLabel: string; } -const inputClasses = - "w-full border border-border bg-background px-3 py-2 text-[15px] font-mono text-foreground placeholder:text-muted/40 outline-none transition-all focus:border-accent input-focus"; - -const labelClasses = "block text-[12px] font-mono 
text-muted uppercase tracking-widest mb-1"; - export function AgentForm({ initialValues, onSubmit, submitLabel }: AgentFormProps) { const [form, setForm] = useState({ name: initialValues?.name || "", diff --git a/src/components/agents/env-vars-editor.tsx b/src/components/agents/env-vars-editor.tsx index 7d43e3b..9678f24 100644 --- a/src/components/agents/env-vars-editor.tsx +++ b/src/components/agents/env-vars-editor.tsx @@ -2,9 +2,7 @@ import { useState } from "react"; import { Plus, Trash2, Eye, EyeOff } from "lucide-react"; - -const inputClasses = - "w-full border border-border bg-background px-3 py-2 text-[15px] font-mono text-foreground placeholder:text-muted/40 outline-none transition-all focus:border-accent input-focus"; +import { inputClasses } from "@/components/shared/form-classes"; type EnvVarEntry = { key: string; value: string }; diff --git a/src/components/settings/env-keys-section.tsx b/src/components/settings/env-keys-section.tsx new file mode 100644 index 0000000..2021a70 --- /dev/null +++ b/src/components/settings/env-keys-section.tsx @@ -0,0 +1,342 @@ +"use client"; + +import { useEffect, useState } from "react"; +import { Key, Eye, EyeOff, Trash2, Plus, X, Loader2 } from "lucide-react"; +import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; +import { Button } from "@/components/ui/button"; +import { settingsInputClasses as inputClasses } from "@/components/shared/form-classes"; + +export function EnvKeysSection() { + const [envKeys, setEnvKeys] = useState<Record<string, { set: boolean; masked?: string; value?: string }>>({}); + const [envExists, setEnvExists] = useState(false); + const [envEditing, setEnvEditing] = useState<Record<string, string>>({}); + const [envSaving, setEnvSaving] = useState(false); + const [envSaved, setEnvSaved] = useState(false); + const [envRevealed, setEnvRevealed] = useState<Record<string, boolean>>({}); + const [showAddKey, setShowAddKey] = useState(false); + const [newKeyName, setNewKeyName] = useState(""); + const [newKeyValue, setNewKeyValue] = useState(""); + const [newKeyError, setNewKeyError] =
useState(null); + + useEffect(() => { + fetchEnvKeys(); + }, []); + + const fetchEnvKeys = async () => { + try { + const res = await fetch("/api/settings/env"); + if (res.ok) { + const data = await res.json(); + setEnvExists(data.exists); + setEnvKeys(data.keys || {}); + } + } catch (error) { + console.error("Error fetching env keys:", error); + } + }; + + const revealEnvKey = async (key: string) => { + if (envRevealed[key]) { + // Toggle off — just hide it + setEnvRevealed({ ...envRevealed, [key]: false }); + return; + } + try { + const res = await fetch(`/api/settings/env?unmask=${key}`); + if (res.ok) { + const data = await res.json(); + const keyData = data.keys?.[key]; + if (keyData?.value) { + setEnvKeys((prev) => ({ + ...prev, + [key]: { ...prev[key], value: keyData.value }, + })); + setEnvRevealed({ ...envRevealed, [key]: true }); + } + } + } catch (error) { + console.error("Error revealing key:", error); + } + }; + + const saveEnvKeys = async () => { + // Only send keys where the user actually typed a value + const dirty = Object.fromEntries( + Object.entries(envEditing).filter(([, v]) => v.length > 0) + ); + if (Object.keys(dirty).length === 0) return; + setEnvSaving(true); + setEnvSaved(false); + try { + const res = await fetch("/api/settings/env", { + method: "PATCH", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify(dirty), + }); + if (res.ok) { + setEnvSaved(true); + setEnvEditing({}); + setEnvRevealed({}); + fetchEnvKeys(); + setTimeout(() => setEnvSaved(false), 3000); + } + } catch (error) { + console.error("Error saving env keys:", error); + } + setEnvSaving(false); + }; + + const addCustomKey = async () => { + const key = newKeyName.trim().toUpperCase(); + if (!key) return; + if (!/^[A-Z][A-Z0-9_]*$/.test(key)) { + setNewKeyError("Use UPPER_SNAKE_CASE (e.g. 
MY_API_KEY)"); + return; + } + if (!newKeyValue.trim()) { + setNewKeyError("Value is required"); + return; + } + setNewKeyError(null); + setEnvSaving(true); + try { + const res = await fetch("/api/settings/env", { + method: "PATCH", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ [key]: newKeyValue }), + }); + if (res.ok) { + setNewKeyName(""); + setNewKeyValue(""); + setShowAddKey(false); + setEnvSaved(true); + fetchEnvKeys(); + setTimeout(() => setEnvSaved(false), 3000); + } else { + const data = await res.json(); + setNewKeyError(data.error || "Failed to add key"); + } + } catch { + setNewKeyError("Failed to add key"); + } + setEnvSaving(false); + }; + + return ( + + + + + API Keys + + + + {!envExists ? ( +
+ Env file not found at /etc/dobby/env. + Run make install to set up Dobby as a service. +
+ ) : ( + <> +

+ Manage your API keys from the environment file at{" "} + + /etc/dobby/env + +

+
+ {(() => { + const knownLabels: Record = { + GEMINI_API_KEY: "Gemini (default)", + OPENAI_API_KEY: "OpenAI", + ANTHROPIC_API_KEY: "Anthropic", + }; + // Show known keys first (even if not set), then any extra keys from env + const knownKeys = ["GEMINI_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]; + const extraKeys = Object.keys(envKeys).filter(k => !knownKeys.includes(k)).sort(); + const allKeys = [...knownKeys, ...extraKeys]; + + return allKeys.map((key) => { + const info = envKeys[key]; + const label = knownLabels[key] || key; + const isEditing = key in envEditing; + const isRevealed = envRevealed[key] || false; + + const displayValue = isEditing + ? envEditing[key] + : isRevealed && info?.value + ? info.value + : ""; + + return ( +
+
+ {label} + + {info?.set ? "configured" : "not set"} + +
+ {!isEditing && !isRevealed && info?.set && ( +
+ {info.masked} +
+ )} +
+
+ + setEnvEditing({ ...envEditing, [key]: e.target.value }) + } + className={inputClasses} + /> +
+ {info?.set && ( + + )} + {info?.set && !isEditing && ( + + )} + {isEditing && ( + + )} +
+
+ ); + }); + })()} +
+ + {/* Save button for inline edits */} + {Object.values(envEditing).some(v => v.length > 0) && ( +
+ + +
+ )} + + {/* Add custom key */} +
+ {showAddKey ? ( +
+
+ + { setNewKeyName(e.target.value.toUpperCase()); setNewKeyError(null); }} + className={inputClasses} + autoFocus + /> +
+
+ + { setNewKeyValue(e.target.value); setNewKeyError(null); }} + className={inputClasses} + /> +
+ {newKeyError && ( +

{newKeyError}

+ )} +
+ + +
+
+ ) : ( + + )} +
+ + {envSaved && ( +
+ Setting saved. Restart Dobby to apply changes. +
+ )} + +

+ Dobby's default provider is Gemini. Changes require a restart to take effect. +

+ + )} +
+
+ ); +} diff --git a/src/components/settings/mcp-servers-section.tsx b/src/components/settings/mcp-servers-section.tsx new file mode 100644 index 0000000..2660932 --- /dev/null +++ b/src/components/settings/mcp-servers-section.tsx @@ -0,0 +1,512 @@ +"use client"; + +import { useEffect, useState } from "react"; +import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; +import { Button } from "@/components/ui/button"; +import { + Trash2, + Power, + PowerOff, + Calendar, + Mail, + FolderOpen, + Github, + Globe, + Database, + MessageSquare, + Search, + Terminal, + ChevronDown, + ChevronUp, + Check, +} from "lucide-react"; + +// --- Preset MCP Servers --- + +interface MCPPreset { + name: string; + description: string; + icon: React.ReactNode; + command: string; + args: string[]; + envKeys: { key: string; label: string; placeholder: string }[]; +} + +const MCP_PRESETS: MCPPreset[] = [ + { + name: "google-calendar", + description: "Read and manage Google Calendar events", + icon: , + command: "npx", + args: ["-y", "@cocal/google-calendar-mcp"], + envKeys: [ + { + key: "GOOGLE_OAUTH_CREDENTIALS", + label: "OAuth credentials JSON path", + placeholder: "~/.config/gcp-oauth.keys.json", + }, + ], + }, + { + name: "gmail", + description: "Read and send emails via Gmail", + icon: , + command: "npx", + args: ["-y", "@anthropic/gmail-mcp"], + envKeys: [], + }, + { + name: "google-drive", + description: "Access and search Google Drive files", + icon: , + command: "npx", + args: ["-y", "@anthropic/google-drive-mcp"], + envKeys: [], + }, + { + name: "github", + description: "Manage repos, issues, and pull requests", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-github"], + envKeys: [ + { + key: "GITHUB_PERSONAL_ACCESS_TOKEN", + label: "GitHub Token", + placeholder: "ghp_...", + }, + ], + }, + { + name: "slack", + description: "Read and send Slack messages", + icon: , + command: "npx", + args: ["-y", 
"@modelcontextprotocol/server-slack"], + envKeys: [ + { + key: "SLACK_BOT_TOKEN", + label: "Bot Token", + placeholder: "xoxb-...", + }, + ], + }, + { + name: "brave-search", + description: "Web search via Brave Search API", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-brave-search"], + envKeys: [ + { + key: "BRAVE_API_KEY", + label: "Brave API Key", + placeholder: "BSA...", + }, + ], + }, + { + name: "filesystem", + description: "Read and write local files", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users"], + envKeys: [], + }, + { + name: "postgres", + description: "Query PostgreSQL databases", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-postgres"], + envKeys: [ + { + key: "POSTGRES_CONNECTION_STRING", + label: "Connection String", + placeholder: "postgresql://user:pass@host/db", + }, + ], + }, + { + name: "fetch", + description: "Fetch and read web page content", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-fetch"], + envKeys: [], + }, + { + name: "puppeteer", + description: "Browser automation and screenshots", + icon: , + command: "npx", + args: ["-y", "@modelcontextprotocol/server-puppeteer"], + envKeys: [], + }, +]; + +// --- Types --- + +interface MCPServer { + id: string; + name: string; + command: string; + args: string[]; + env: Record; + enabled: boolean; +} + +import { settingsInputClasses as inputClasses } from "@/components/shared/form-classes"; + +function presetButtonClass(installed: boolean, isConfiguring: boolean): string { + if (installed) return "cursor-default border-border/50 opacity-40"; + if (isConfiguring) return "border-accent/30 bg-accent-glow"; + return "border-border hover:border-border-hover hover:bg-surface-hover"; +} + +function presetIconClass(installed: boolean, isConfiguring: boolean): string { + if (installed) return "text-muted"; + if (isConfiguring) return "text-accent"; + return 
"text-muted-foreground"; +} + +export function MCPServersSection() { + const [servers, setServers] = useState([]); + const [showCustom, setShowCustom] = useState(false); + const [addingPreset, setAddingPreset] = useState(null); + const [presetEnvValues, setPresetEnvValues] = useState>({}); + const [newServer, setNewServer] = useState({ + name: "", + command: "", + args: "", + env: "", + }); + + useEffect(() => { + fetchServers(); + }, []); + + const fetchServers = async () => { + try { + const res = await fetch("/api/mcp/servers"); + if (res.ok) { + setServers(await res.json()); + } + } catch (error) { + console.error("Error fetching servers:", error); + } + }; + + const addFromPreset = async (preset: MCPPreset) => { + if (preset.envKeys.length > 0 && addingPreset?.name !== preset.name) { + setAddingPreset(preset); + setPresetEnvValues({}); + return; + } + + const env: Record = {}; + for (const ek of preset.envKeys) { + if (presetEnvValues[ek.key]) { + env[ek.key] = presetEnvValues[ek.key]; + } + } + + try { + const res = await fetch("/api/mcp/servers", { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + name: preset.name, + command: preset.command, + args: preset.args, + env, + }), + }); + + if (res.ok) { + setAddingPreset(null); + setPresetEnvValues({}); + fetchServers(); + } + } catch (error) { + console.error("Error adding server:", error); + } + }; + + const addCustomServer = async () => { + if (!newServer.name || !newServer.command) return; + + try { + const res = await fetch("/api/mcp/servers", { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + name: newServer.name, + command: newServer.command, + args: newServer.args + ? newServer.args.split(" ").filter(Boolean) + : [], + env: newServer.env ? 
JSON.parse(newServer.env) : {}, + }), + }); + + if (res.ok) { + setNewServer({ name: "", command: "", args: "", env: "" }); + setShowCustom(false); + fetchServers(); + } + } catch (error) { + console.error("Error adding server:", error); + } + }; + + const toggleServer = async (id: string, enabled: boolean) => { + try { + await fetch(`/api/mcp/servers/${id}`, { + method: "PATCH", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ enabled: !enabled }), + }); + fetchServers(); + } catch (error) { + console.error("Error toggling server:", error); + } + }; + + const deleteServer = async (id: string) => { + try { + await fetch(`/api/mcp/servers/${id}`, { method: "DELETE" }); + fetchServers(); + } catch (error) { + console.error("Error deleting server:", error); + } + }; + + const installedNames = new Set(servers.map((s) => s.name)); + + return ( + <> + {/* Active Servers */} + {servers.length > 0 && ( + + + Active Integrations + + + {servers.map((server) => ( +
+
+
+ + + {server.name} + +
+

+ {server.command} {(server.args || []).join(" ")} +

+
+
+ + +
+
+ ))} +
+
+ )} + + {/* Preset Library */} + + + Integration Library + + +
+ {MCP_PRESETS.map((preset, idx) => { + const installed = installedNames.has(preset.name); + const isConfiguring = addingPreset?.name === preset.name; + + return ( +
+ + + {/* Inline env config form */} + {isConfiguring && ( +
+ {preset.envKeys.map((ek) => ( +
+ + + setPresetEnvValues({ + ...presetEnvValues, + [ek.key]: e.target.value, + }) + } + className={inputClasses} + /> +
+ ))} +
+ + +
+
+ )} +
+ ); + })} +
+ + {/* Custom server toggle */} +
+ + + {showCustom && ( +
+ + setNewServer({ ...newServer, name: e.target.value }) + } + className={inputClasses} + /> + + setNewServer({ ...newServer, command: e.target.value }) + } + className={inputClasses} + /> + + setNewServer({ ...newServer, args: e.target.value }) + } + className={inputClasses} + /> + + setNewServer({ ...newServer, env: e.target.value }) + } + className={inputClasses} + /> +
+ + +
+
+ )} +
+
+
+ + ); +} diff --git a/src/components/settings/session-retention-section.tsx b/src/components/settings/session-retention-section.tsx new file mode 100644 index 0000000..74fd90a --- /dev/null +++ b/src/components/settings/session-retention-section.tsx @@ -0,0 +1,133 @@ +"use client"; + +import { useEffect, useState } from "react"; +import { Clock } from "lucide-react"; +import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card"; +import { Button } from "@/components/ui/button"; +import { settingsInputClasses as inputClasses } from "@/components/shared/form-classes"; + +export function SessionRetentionSection() { + const [sessionRetentionDays, setSessionRetentionDays] = useState(""); + const [sessionRetentionSaving, setSessionRetentionSaving] = useState(false); + const [sessionRetentionSaved, setSessionRetentionSaved] = useState(false); + + useEffect(() => { + fetchAppSettings(); + }, []); + + const fetchAppSettings = async () => { + try { + const res = await fetch("/api/settings"); + if (res.ok) { + const data = await res.json(); + if (data.session_retention_days) { + setSessionRetentionDays(data.session_retention_days); + } + } + } catch (error) { + console.error("Error fetching app settings:", error); + } + }; + + const saveSessionRetention = async () => { + setSessionRetentionSaving(true); + setSessionRetentionSaved(false); + try { + const res = await fetch("/api/settings", { + method: "PATCH", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + session_retention_days: sessionRetentionDays || "", + }), + }); + if (res.ok) { + setSessionRetentionSaved(true); + setTimeout(() => setSessionRetentionSaved(false), 2000); + } + } catch (error) { + console.error("Error saving session retention:", error); + } + setSessionRetentionSaving(false); + }; + + return ( + + + + + Session Retention + + + +

+ Sessions older than the specified days will be automatically cleaned up. + Leave empty to preserve all sessions. +

+
+ +
+ { + const v = e.target.value; + if (v === "" || (parseInt(v, 10) >= 1 && !v.includes("."))) { + setSessionRetentionDays(v); + } + }} + className={`${inputClasses} max-w-[140px]`} + /> +
+ + {sessionRetentionDays && ( + + )} +
+
+ {sessionRetentionSaved && ( +

+ Session retention setting saved +

+ )} + {sessionRetentionDays && ( +

+ Sessions older than {sessionRetentionDays} day{sessionRetentionDays !== "1" ? "s" : ""} will be cleaned up +

+ )} +
+
+
+ ); +} diff --git a/src/components/shared/form-classes.ts b/src/components/shared/form-classes.ts new file mode 100644 index 0000000..3fadbd0 --- /dev/null +++ b/src/components/shared/form-classes.ts @@ -0,0 +1,7 @@ +export const inputClasses = + "w-full border border-border bg-background px-3 py-2 text-[15px] font-mono text-foreground placeholder:text-muted/40 outline-none transition-all focus:border-accent input-focus"; + +export const labelClasses = "block text-[12px] font-mono text-muted uppercase tracking-widest mb-1"; + +export const settingsInputClasses = + "w-full rounded-lg border border-border bg-background px-3 py-2 text-sm text-foreground outline-none transition-colors placeholder:text-muted focus:border-border-hover input-focus"; diff --git a/src/components/shared/source-icons.tsx b/src/components/shared/source-icons.tsx new file mode 100644 index 0000000..d8b5252 --- /dev/null +++ b/src/components/shared/source-icons.tsx @@ -0,0 +1,19 @@ +export function SlackIcon({ className }: { className?: string }) { + return ( + + + + + + + ); +} + +export function TelegramIcon({ className }: { className?: string }) { + return ( + + + + + ); +} diff --git a/src/lib/api/__tests__/utils.test.ts b/src/lib/api/__tests__/utils.test.ts index 0e85fdc..bd90db4 100644 --- a/src/lib/api/__tests__/utils.test.ts +++ b/src/lib/api/__tests__/utils.test.ts @@ -1,72 +1,29 @@ import { describe, test, expect } from "bun:test"; -import { withErrorHandler, jsonResponse, notFound, conflict, badRequest } from "../utils"; +import { z } from "zod"; +import { withErrorHandler, parseBody } from "../utils"; +import { NextResponse } from "next/server"; -describe("jsonResponse", () => { - test("returns Response with JSON data and default 200 status", async () => { - const res = jsonResponse({ message: "ok" }); - expect(res.status).toBe(200); - const body = await res.json(); - expect(body).toEqual({ message: "ok" }); - }); - - test("accepts custom status code", async () => { - const res = 
jsonResponse({ created: true }, 201); - expect(res.status).toBe(201); - }); -}); - -describe("notFound", () => { - test("returns 404 with default message", async () => { - const res = notFound(); - expect(res.status).toBe(404); - const body = await res.json(); - expect(body).toEqual({ error: "Not found" }); - }); - - test("returns 404 with custom message", async () => { - const res = notFound("Agent not found"); - expect(res.status).toBe(404); - const body = await res.json(); - expect(body).toEqual({ error: "Agent not found" }); - }); -}); +describe("parseBody", () => { + const schema = z.object({ name: z.string().min(1) }); -describe("conflict", () => { - test("returns 409 with default message", async () => { - const res = conflict(); - expect(res.status).toBe(409); - const body = await res.json(); - expect(body).toEqual({ error: "Already exists" }); + test("returns data on valid input", () => { + const result = parseBody({ name: "test" }, schema); + expect(result.data).toEqual({ name: "test" }); + expect(result.error).toBeUndefined(); }); - test("returns 409 with custom message", async () => { - const res = conflict("Name taken"); - expect(res.status).toBe(409); - const body = await res.json(); - expect(body).toEqual({ error: "Name taken" }); - }); -}); - -describe("badRequest", () => { - test("returns 400 with default message", async () => { - const res = badRequest(); - expect(res.status).toBe(400); - const body = await res.json(); - expect(body).toEqual({ error: "Bad request" }); - }); - - test("returns 400 with custom message", async () => { - const res = badRequest("Invalid input"); - expect(res.status).toBe(400); - const body = await res.json(); - expect(body).toEqual({ error: "Invalid input" }); + test("returns 400 error response on invalid input", async () => { + const result = parseBody({ name: "" }, schema); + expect(result.data).toBeUndefined(); + expect(result.error).toBeDefined(); + expect(result.error!.status).toBe(400); }); }); 
describe("withErrorHandler", () => { test("passes through successful responses", async () => { const handler = withErrorHandler(async () => { - return jsonResponse({ ok: true }); + return NextResponse.json({ ok: true }); }); const request = new Request("http://localhost/api/test"); diff --git a/src/lib/api/utils.ts b/src/lib/api/utils.ts index 11b2562..04a6efb 100644 --- a/src/lib/api/utils.ts +++ b/src/lib/api/utils.ts @@ -1,4 +1,5 @@ import { NextResponse } from "next/server"; +import type { ZodSchema, infer as ZodInfer } from "zod"; type RouteContext = { params: Promise> }; @@ -28,22 +29,22 @@ export function withErrorHandler(handler: RouteHandler): RouteHandler { }; } -/** Shorthand for JSON responses */ -export function jsonResponse(data: T, status = 200) { - return NextResponse.json(data, { status }); -} - -/** 404 response */ -export function notFound(message = "Not found") { - return NextResponse.json({ error: message }, { status: 404 }); -} - -/** 409 conflict response */ -export function conflict(message = "Already exists") { - return NextResponse.json({ error: message }, { status: 409 }); -} - -/** 400 bad request response */ -export function badRequest(message = "Bad request") { - return NextResponse.json({ error: message }, { status: 400 }); +/** + * Parse a request body against a Zod schema. + * Returns { data } on success, or { error: Response } on failure. 
+ */ +export function parseBody<T extends ZodSchema>( + body: unknown, + schema: T +): { data: ZodInfer<T>; error?: never } | { data?: never; error: Response } { + const parsed = schema.safeParse(body); + if (!parsed.success) { + return { + error: NextResponse.json( + { error: parsed.error.issues[0]?.message || "Invalid input" }, + { status: 400 } + ), + }; + } + return { data: parsed.data }; } diff --git a/src/lib/claude/__tests__/build-timeline.test.ts b/src/lib/claude/__tests__/build-timeline.test.ts new file mode 100644 index 0000000..6983d71 --- /dev/null +++ b/src/lib/claude/__tests__/build-timeline.test.ts @@ -0,0 +1,207 @@ +import { describe, test, expect } from "bun:test"; +import type { ClaudeSessionEntry } from "../types"; +import { buildTimeline } from "../session-detail-reader"; + +// Helper to create minimal entries +function entry(overrides: Partial<ClaudeSessionEntry>): ClaudeSessionEntry { + return { + type: "assistant", + sessionId: "test-session", + timestamp: "2026-01-01T00:00:00Z", + ...overrides, + }; +} + +describe("buildTimeline (default / parent mode)", () => { + test("returns empty array for empty entries", () => { + expect(buildTimeline([])).toEqual([]); + }); + + test("skips entries without timestamp", () => { + const result = buildTimeline([ + entry({ timestamp: undefined as unknown as string, type: "assistant", message: { role: "assistant", content: "hi" } }), + ]); + expect(result).toEqual([]); + }); + + test("captures external user text messages", () => { + const result = buildTimeline([ + entry({ type: "user", userType: "external", message: { role: "user", content: "Hello" } }), + ]); + expect(result).toHaveLength(1); + expect(result[0].kind).toBe("user"); + expect(result[0].text).toBe("Hello"); + }); + + test("skips internal user messages in default mode", () => { + const result = buildTimeline([ + entry({ type: "user", userType: "internal", message: { role: "user", content: "Internal prompt" } }), + ]); + // Internal user messages are treated as tool results (not user text)
+ // They should not produce a "user" kind entry + const userEntries = result.filter((e) => e.kind === "user"); + expect(userEntries).toHaveLength(0); + }); + + test("captures assistant text blocks", () => { + const result = buildTimeline([ + entry({ + type: "assistant", + message: { + role: "assistant", + content: [{ type: "text", text: "I will help you." }], + }, + }), + ]); + expect(result).toHaveLength(1); + expect(result[0].kind).toBe("assistant"); + expect(result[0].text).toBe("I will help you."); + }); + + test("captures tool_use blocks", () => { + const result = buildTimeline([ + entry({ + type: "assistant", + message: { + role: "assistant", + content: [{ type: "tool_use", name: "Bash", input: { command: "ls -la" } }], + }, + }), + ]); + expect(result).toHaveLength(1); + expect(result[0].kind).toBe("tool_use"); + expect(result[0].toolName).toBe("Bash"); + expect(result[0].text).toContain("ls -la"); + }); + + test("captures tool_result blocks from user entries", () => { + const result = buildTimeline([ + entry({ + type: "user", + message: { + role: "user", + content: [{ type: "tool_result", tool_use_id: "t1", content: "file.txt", is_error: false }], + }, + }), + ]); + expect(result).toHaveLength(1); + expect(result[0].kind).toBe("tool_result"); + expect(result[0].isError).toBe(false); + }); + + test("skips sidechain entries in default mode", () => { + const result = buildTimeline([ + entry({ + type: "assistant", + isSidechain: true, + message: { + role: "assistant", + content: [{ type: "text", text: "Sidechain work" }], + }, + }), + ]); + expect(result).toHaveLength(0); + }); + + test("deduplicates sub-agent launches", () => { + const result = buildTimeline([ + entry({ + type: "progress", + data: { type: "agent_progress", agentId: "agent-1", prompt: "Do task" }, + }), + entry({ + type: "progress", + data: { type: "agent_progress", agentId: "agent-1", prompt: "Do task again" }, + }), + ]); + expect(result).toHaveLength(1); + 
expect(result[0].kind).toBe("sub_agent"); + expect(result[0].agentId).toBe("agent-1"); + }); + + test("handles string content on assistant messages", () => { + const result = buildTimeline([ + entry({ type: "assistant", message: { role: "assistant", content: "Plain text" } }), + ]); + expect(result).toHaveLength(1); + expect(result[0].kind).toBe("assistant"); + expect(result[0].text).toBe("Plain text"); + }); + + test("includes token usage from message.usage", () => { + const result = buildTimeline([ + entry({ + type: "assistant", + message: { + role: "assistant", + content: [{ type: "text", text: "Response" }], + usage: { input_tokens: 100, output_tokens: 50, cache_read_input_tokens: 10, cache_creation_input_tokens: 5 }, + }, + }), + ]); + expect(result[0].tokenUsage).toEqual({ + inputTokens: 100, + outputTokens: 50, + cacheReadTokens: 10, + cacheCreationTokens: 5, + }); + }); +}); + +describe("buildTimeline (sub-agent mode: includeInternalMessages)", () => { + test("includes internal user messages", () => { + const result = buildTimeline( + [entry({ type: "user", userType: "internal", message: { role: "user", content: "Internal prompt" } })], + { includeInternalMessages: true } + ); + const userEntries = result.filter((e) => e.kind === "user"); + expect(userEntries).toHaveLength(1); + expect(userEntries[0].text).toBe("Internal prompt"); + }); + + test("does not skip sidechain entries", () => { + const result = buildTimeline( + [ + entry({ + type: "assistant", + isSidechain: true, + message: { role: "assistant", content: [{ type: "text", text: "Sub-agent work" }] }, + }), + ], + { includeInternalMessages: true } + ); + expect(result).toHaveLength(1); + expect(result[0].text).toBe("Sub-agent work"); + }); + + test("does not track sub-agent launches", () => { + const result = buildTimeline( + [ + entry({ + type: "progress", + data: { type: "agent_progress", agentId: "agent-1", prompt: "Do task" }, + }), + ], + { includeInternalMessages: true } + ); + 
expect(result).toHaveLength(0); + }); +}); + +describe("buildTimeline (mixed options)", () => { + test("default options match parent timeline behavior", () => { + const entries = [ + entry({ type: "user", userType: "external", message: { role: "user", content: "Hi" } }), + entry({ type: "user", userType: "internal", message: { role: "user", content: "Internal" } }), + entry({ type: "assistant", isSidechain: true, message: { role: "assistant", content: [{ type: "text", text: "Side" }] } }), + entry({ type: "progress", data: { type: "agent_progress", agentId: "a1", prompt: "p" } }), + entry({ type: "assistant", message: { role: "assistant", content: [{ type: "text", text: "Reply" }] } }), + ]; + + const result = buildTimeline(entries); + // Should have: 1 user, 1 sub_agent, 1 assistant = 3 entries + // Internal user message is not a "user" kind (treated as tool result container with no tool_results) + // Sidechain is filtered out + expect(result.map((e) => e.kind)).toEqual(["user", "sub_agent", "assistant"]); + }); +}); diff --git a/src/lib/claude/session-detail-reader.ts b/src/lib/claude/session-detail-reader.ts index 3b84164..c00f569 100644 --- a/src/lib/claude/session-detail-reader.ts +++ b/src/lib/claude/session-detail-reader.ts @@ -14,7 +14,7 @@ import { decodeProjectDir, parseJsonlEntries, } from "./utils"; -import { getSessionStatus, aggregateTokensFromEntries, extractMessageUsage } from "./session-utils"; +import { getSessionStatus, aggregateTokensFromEntries, extractMessageUsage, extractSessionMetadata, summarizeToolInput } from "./session-utils"; function getContentBlocks(entry: ClaudeSessionEntry) { const c = entry.message?.content; @@ -22,20 +22,16 @@ function getContentBlocks(entry: ClaudeSessionEntry) { return c; } -function summarizeToolInput(name: string, input?: Record): string { - if (!input) return name; - if ("command" in input) return `${name}: ${String(input.command).slice(0, 120)}`; - if ("file_path" in input) return `${name}: 
${String(input.file_path)}`; - if ("query" in input) return `${name}: ${String(input.query).slice(0, 120)}`; - if ("pattern" in input) return `${name}: ${String(input.pattern).slice(0, 80)}`; - if ("prompt" in input) return `${name}: ${String(input.prompt).slice(0, 120)}`; - if ("description" in input) return `${name}: ${String(input.description).slice(0, 120)}`; - if ("url" in input) return `${name}: ${String(input.url).slice(0, 100)}`; - if ("old_string" in input) return `${name}: replacing in ${input.file_path ?? "file"}`; - return name; +interface BuildTimelineOptions { + /** Include internal user messages (for sub-agent timelines). Default: false */ + includeInternalMessages?: boolean; } -function buildTimeline(entries: ClaudeSessionEntry[]): TimelineEntry[] { +export function buildTimeline( + entries: ClaudeSessionEntry[], + options: BuildTimelineOptions = {} +): TimelineEntry[] { + const { includeInternalMessages = false } = options; const timeline: TimelineEntry[] = []; const seenSubAgents = new Set(); @@ -43,8 +39,8 @@ function buildTimeline(entries: ClaudeSessionEntry[]): TimelineEntry[] { const ts = entry.timestamp; if (!ts) continue; - // Sub-agent launch (deduplicate: only show first occurrence per agent) - if (entry.type === "progress" && entry.data?.type === "agent_progress" && entry.data.agentId) { + // Sub-agent launch tracking (only in parent timelines) + if (!includeInternalMessages && entry.type === "progress" && entry.data?.type === "agent_progress" && entry.data.agentId) { if (!seenSubAgents.has(entry.data.agentId)) { seenSubAgents.add(entry.data.agentId); timeline.push({ @@ -61,8 +57,8 @@ function buildTimeline(entries: ClaudeSessionEntry[]): TimelineEntry[] { if (entry.type !== "user" && entry.type !== "assistant") continue; if (!entry.message) continue; - // Skip sidechain entries (sub-agent work) in the parent timeline - if (entry.isSidechain) continue; + // Skip sidechain entries in parent timelines (sub-agent timelines include 
everything) + if (!includeInternalMessages && entry.isSidechain) continue; const blocks = getContentBlocks(entry); const stringContent = typeof entry.message.content === "string" ? entry.message.content : null; @@ -70,12 +66,16 @@ function buildTimeline(entries: ClaudeSessionEntry[]): TimelineEntry[] { const usage = extractMessageUsage(entry); // User message with text content - if (entry.type === "user" && entry.userType === "external") { + const isUserText = includeInternalMessages + ? (entry.userType === "external" || entry.userType === "internal") + : entry.userType === "external"; + + if (entry.type === "user" && isUserText) { const text = stringContent || blocks.find((b) => b.type === "text")?.text; if (text) { timeline.push({ timestamp: ts, kind: "user", text: text.slice(0, 500) }); + continue; } - continue; } // Tool results (user messages that contain tool_result blocks) @@ -256,19 +256,7 @@ export async function readSessionDetail( const entries = parseJsonlEntries(content); if (entries.length === 0) return null; - // Extract metadata - let slug: string | null = null; - let model: string | null = null; - let gitBranch: string | null = null; - let cwd: string | null = null; - - for (const e of entries) { - if (!slug && e.slug) slug = e.slug; - if (!model && e.message?.model) model = e.message.model; - if (!gitBranch && e.gitBranch) gitBranch = e.gitBranch; - if (!cwd && e.cwd) cwd = e.cwd; - if (slug && model && gitBranch && cwd) break; - } + const { slug, model, gitBranch, cwd } = extractSessionMetadata(entries); const totalTokens = aggregateTokensFromEntries(entries); const firstEntry = entries[0]; @@ -328,18 +316,7 @@ export async function readSubAgentDetail( const entries = parseJsonlEntries(content); if (entries.length === 0) return null; - let slug: string | null = null; - let model: string | null = null; - let gitBranch: string | null = null; - let cwd: string | null = null; - - for (const e of entries) { - if (!slug && e.slug) slug = e.slug; - if 
(!model && e.message?.model) model = e.message.model; - if (!gitBranch && e.gitBranch) gitBranch = e.gitBranch; - if (!cwd && e.cwd) cwd = e.cwd; - if (slug && model && gitBranch && cwd) break; - } + const { slug, model, gitBranch, cwd } = extractSessionMetadata(entries); const totalTokens = aggregateTokensFromEntries(entries); const firstEntry = entries[0]; @@ -347,7 +324,7 @@ export async function readSubAgentDetail( const projectPath = cwd || decodeProjectDir(projectDir); const status = getSessionStatus(fileStat.mtimeMs, null)!; const fallbackTime = new Date(fileStat.mtimeMs).toISOString(); - const timeline = buildSubAgentTimeline(entries); + const timeline = buildTimeline(entries, { includeInternalMessages: true }); return { session: { @@ -368,80 +345,3 @@ export async function readSubAgentDetail( }; } -// Sub-agent timeline doesn't skip sidechains (everything is its own work) -function buildSubAgentTimeline(entries: ClaudeSessionEntry[]): TimelineEntry[] { - const timeline: TimelineEntry[] = []; - - for (const entry of entries) { - const ts = entry.timestamp; - if (!ts) continue; - - if (entry.type !== "user" && entry.type !== "assistant") continue; - if (!entry.message) continue; - - const blocks = getContentBlocks(entry); - const stringContent = typeof entry.message.content === "string" ? entry.message.content : null; - - const usage = extractMessageUsage(entry); - - // User text message (first message is the prompt) - if (entry.type === "user" && (entry.userType === "external" || entry.userType === "internal")) { - const text = stringContent || blocks.find((b) => b.type === "text")?.text; - if (text) { - timeline.push({ timestamp: ts, kind: "user", text: text.slice(0, 500) }); - continue; - } - } - - // Tool results - if (entry.type === "user") { - for (const block of blocks) { - if (block.type === "tool_result") { - const resultText = typeof block.content === "string" ? 
block.content : ""; - timeline.push({ - timestamp: ts, - kind: "tool_result", - text: resultText.slice(0, 300), - isError: block.is_error ?? false, - }); - } - } - continue; - } - - // Assistant messages - if (entry.type === "assistant") { - for (const block of blocks) { - if (block.type === "tool_use" && block.name) { - timeline.push({ - timestamp: ts, - kind: "tool_use", - text: summarizeToolInput(block.name, block.input), - toolName: block.name, - tokenUsage: usage, - }); - } else if (block.type === "text" && block.text) { - const trimmed = block.text.trim(); - if (trimmed.length > 0) { - timeline.push({ - timestamp: ts, - kind: "assistant", - text: trimmed.slice(0, 500), - tokenUsage: usage, - }); - } - } - } - if (blocks.length === 0 && stringContent) { - timeline.push({ - timestamp: ts, - kind: "assistant", - text: stringContent.slice(0, 500), - tokenUsage: usage, - }); - } - } - } - - return timeline; -} diff --git a/src/lib/claude/session-reader.ts b/src/lib/claude/session-reader.ts index 30aa709..6617581 100644 --- a/src/lib/claude/session-reader.ts +++ b/src/lib/claude/session-reader.ts @@ -7,7 +7,7 @@ import type { } from "./types"; import { CLAUDE_DIR, PROJECTS_DIR } from "./constants"; import { shortenModel, extractProjectName, decodeProjectDir } from "./utils"; -import { getSessionStatus } from "./session-utils"; +import { getSessionStatus, extractSessionMetadata, summarizeToolInput } from "./session-utils"; const TAIL_BYTES = 16_384; @@ -49,21 +49,7 @@ function extractLastAction(entries: ClaudeSessionEntry[]): { for (const block of entry.message.content) { if (block.type === "tool_use" && block.name) { - let description = block.name; - if (block.input) { - if ("command" in block.input) { - description = `${block.name}: ${String(block.input.command).slice(0, 80)}`; - } else if ("file_path" in block.input) { - description = `${block.name}: ${String(block.input.file_path)}`; - } else if ("query" in block.input) { - description = `${block.name}: 
${String(block.input.query).slice(0, 80)}`; - } else if ("pattern" in block.input) { - description = `${block.name}: ${String(block.input.pattern)}`; - } else if ("description" in block.input) { - description = String(block.input.description).slice(0, 100); - } - } - return { lastAction: description, lastToolName: block.name }; + return { lastAction: summarizeToolInput(block.name, block.input), lastToolName: block.name }; } if (block.type === "text" && block.text && block.text.length > 10) { return { @@ -82,25 +68,22 @@ function extractLastAction(entries: ClaudeSessionEntry[]): { } async function aggregateTokensFromFile(filePath: string) { - let inputTokens = 0; - let outputTokens = 0; - let cacheReadTokens = 0; - let cacheCreationTokens = 0; + const content = await fs.readFile(filePath, "utf-8"); + + // Fast-path: use string checks to skip JSON parsing for lines without usage data + let inputTokens = 0, outputTokens = 0, cacheReadTokens = 0, cacheCreationTokens = 0; let messageCount = 0; - const content = await fs.readFile(filePath, "utf-8"); for (const line of content.split("\n")) { if (!line) continue; - // Fast check: skip lines without usage data if (!line.includes('"usage"')) { - // Still count messages if (line.includes('"type":"user"') || line.includes('"type":"assistant"')) { messageCount++; } continue; } try { - const entry = JSON.parse(line); + const entry = JSON.parse(line) as ClaudeSessionEntry; if (entry.message?.usage) { const u = entry.message.usage; inputTokens += u.input_tokens || 0; @@ -108,37 +91,16 @@ async function aggregateTokensFromFile(filePath: string) { cacheReadTokens += u.cache_read_input_tokens || 0; cacheCreationTokens += u.cache_creation_input_tokens || 0; } - if (entry.type === "user" || entry.type === "assistant") { - messageCount++; - } - } catch { - // skip malformed lines - } + if (entry.type === "user" || entry.type === "assistant") messageCount++; + } catch { /* skip malformed */ } } + return { tokenUsage: { inputTokens, 
outputTokens, cacheReadTokens, cacheCreationTokens }, messageCount, }; } -function extractMetadata(entries: ClaudeSessionEntry[]) { - let slug: string | null = null; - let model: string | null = null; - let gitBranch: string | null = null; - let cwd: string | null = null; - - // Walk backwards to get most recent values - for (let i = entries.length - 1; i >= 0; i--) { - const e = entries[i]; - if (!slug && e.slug) slug = e.slug; - if (!model && e.message?.model) model = e.message.model; - if (!gitBranch && e.gitBranch) gitBranch = e.gitBranch; - if (!cwd && e.cwd) cwd = e.cwd; - if (slug && model && gitBranch && cwd) break; - } - return { slug, model, gitBranch, cwd }; -} - async function readStatsCache(): Promise<{ totalTokensToday: number; totalSessionsToday: number; @@ -229,7 +191,8 @@ export async function scanSessions(): Promise { if (entries.length === 0) return null; const sessionId = file.replace(".jsonl", ""); - const meta = extractMetadata(entries); + // Walk backwards to prefer most-recent metadata values from the tail window + const meta = extractSessionMetadata([...entries].reverse()); const fallbackPath = decodeProjectDir(projDir); const cwdPath = meta.cwd || fallbackPath; const { projectName, workspaceName } = extractProjectName(cwdPath); diff --git a/src/lib/claude/session-utils.ts b/src/lib/claude/session-utils.ts index ca4f22e..d456a12 100644 --- a/src/lib/claude/session-utils.ts +++ b/src/lib/claude/session-utils.ts @@ -43,6 +43,43 @@ export function aggregateTokensFromEntries(entries: ClaudeSessionEntry[]): Token return { inputTokens, outputTokens, cacheReadTokens, cacheCreationTokens }; } +/** Extract first-found session metadata (slug, model, gitBranch, cwd) from entries. 
*/
+export function extractSessionMetadata(entries: ClaudeSessionEntry[]): {
+ slug: string | null;
+ model: string | null;
+ gitBranch: string | null;
+ cwd: string | null;
+} {
+ let slug: string | null = null;
+ let model: string | null = null;
+ let gitBranch: string | null = null;
+ let cwd: string | null = null;
+
+ for (const e of entries) {
+ if (!slug && e.slug) slug = e.slug;
+ if (!model && e.message?.model) model = e.message.model;
+ if (!gitBranch && e.gitBranch) gitBranch = e.gitBranch;
+ if (!cwd && e.cwd) cwd = e.cwd;
+ if (slug && model && gitBranch && cwd) break;
+ }
+
+ return { slug, model, gitBranch, cwd };
+}
+
+/** Summarize a tool_use block's input for display in timelines and session lists. */
+export function summarizeToolInput(name: string, input?: Record<string, unknown>): string {
+ if (!input) return name;
+ if ("command" in input) return `${name}: ${String(input.command).slice(0, 120)}`;
+ if ("file_path" in input) return `${name}: ${String(input.file_path)}`;
+ if ("query" in input) return `${name}: ${String(input.query).slice(0, 120)}`;
+ if ("pattern" in input) return `${name}: ${String(input.pattern).slice(0, 80)}`;
+ if ("prompt" in input) return `${name}: ${String(input.prompt).slice(0, 120)}`;
+ if ("description" in input) return `${name}: ${String(input.description).slice(0, 120)}`;
+ if ("url" in input) return `${name}: ${String(input.url).slice(0, 100)}`;
+ if ("old_string" in input) return `${name}: replacing in ${input.file_path ??
"file"}`; + return name; +} + /** Extract per-message token usage for timeline entries */ export function extractMessageUsage( entry: ClaudeSessionEntry diff --git a/src/lib/db/notification-config.ts b/src/lib/db/notification-config.ts new file mode 100644 index 0000000..9ae9a12 --- /dev/null +++ b/src/lib/db/notification-config.ts @@ -0,0 +1,52 @@ +import { db } from "@/lib/db"; +import { notificationConfigs } from "@/lib/db/schema"; +import { eq } from "drizzle-orm"; + +/** + * Upsert a notification config by channel key. + */ +export function upsertNotificationConfig( + channel: string, + config: Record, + enabled = true +): string { + const existing = db + .select() + .from(notificationConfigs) + .where(eq(notificationConfigs.channel, channel)) + .get(); + + if (existing) { + db.update(notificationConfigs) + .set({ enabled, config, updatedAt: new Date() }) + .where(eq(notificationConfigs.id, existing.id)) + .run(); + return existing.id; + } else { + const rows = db.insert(notificationConfigs) + .values({ channel, enabled, config }) + .returning({ id: notificationConfigs.id }) + .all(); + return rows[0].id; + } +} + +/** + * Get a notification config by channel key. Returns null if not found. + */ +export function getNotificationConfig(channel: string) { + return db + .select() + .from(notificationConfigs) + .where(eq(notificationConfigs.channel, channel)) + .get() ?? null; +} + +/** + * Delete a notification config by channel key. + */ +export function deleteNotificationConfig(channel: string): void { + db.delete(notificationConfigs) + .where(eq(notificationConfigs.channel, channel)) + .run(); +} diff --git a/src/lib/issues/git-worktree.ts b/src/lib/issues/git-worktree.ts new file mode 100644 index 0000000..1b834c0 --- /dev/null +++ b/src/lib/issues/git-worktree.ts @@ -0,0 +1,38 @@ +import { execFileSync } from "node:child_process"; + +/** + * Remove a git worktree and prune stale entries. 
Best-effort — silently + * swallows errors (worktree may already be gone from disk). + */ +export function removeWorktree(worktreePath: string, repoPath: string): void { + try { + execFileSync("git", ["worktree", "remove", worktreePath, "--force"], { + cwd: repoPath, + stdio: "ignore", + }); + execFileSync("git", ["worktree", "prune"], { cwd: repoPath, stdio: "ignore" }); + } catch { + // Worktree may already be gone from disk + } +} + +/** Force-remove a worktree without pruning. Use with pruneWorktrees for batch operations. */ +export function forceRemoveWorktree(worktreePath: string, repoPath: string): void { + try { + execFileSync("git", ["worktree", "remove", worktreePath, "--force"], { + cwd: repoPath, + stdio: "ignore", + }); + } catch { + // Worktree may already be gone from disk + } +} + +/** Prune stale worktree entries. Best-effort. */ +export function pruneWorktrees(repoPath: string): void { + try { + execFileSync("git", ["worktree", "prune"], { cwd: repoPath, stdio: "ignore" }); + } catch { + // Best-effort + } +} diff --git a/src/lib/issues/pipeline.ts b/src/lib/issues/pipeline.ts index 7bdcea0..07b5aba 100644 --- a/src/lib/issues/pipeline.ts +++ b/src/lib/issues/pipeline.ts @@ -1,1536 +1,2 @@ -import { spawn } from "node:child_process"; -import { existsSync, mkdirSync } from "node:fs"; -import { join } from "node:path"; -import { execFileSync } from "node:child_process"; -import { db } from "@/lib/db"; -import { issues, issueMessages, repositories } from "@/lib/db/schema"; -import { getIssueAttachments } from "./attachments"; -import { eq, and, gt } from "drizzle-orm"; -import { resolveClaudePath } from "@/lib/utils/resolve-claude-path"; -import { getSetting, setSetting } from "@/lib/db/app-settings"; -import { sendTelegramMessageWithId, escapeHtml, TELEGRAM_SAFE_MSG_LEN } from "@/lib/notifications/telegram"; -import { sendSlackMessage, SLACK_SAFE_MSG_LEN } from "@/lib/notifications/slack"; -import type { PipelinePhaseResult, IssueStatus, 
IssuesTransportConfig } from "./types";
-import {
- PHASE_STATUS_MAP, MAX_PLAN_ITERATIONS, MAX_CODE_REVIEW_ITERATIONS,
- PHASE_TIMEOUT_MS, IMPL_TIMEOUT_MS, QA_TIMEOUT_MS,
-} from "./types";
-
-const MAX_FALLBACK_CHARS = 50_000;
-
-function telegramMarkupToSlackText(text: string): string {
- return text
- .replace(/<\/?(?:b|i|code|pre)>/g, "")
- .replace(/&lt;/g, "<")
- .replace(/&gt;/g, ">")
- .replace(/&amp;/g, "&")
- .trim();
-}
-
-/** Files that should never be auto-committed. Tested against full path from git status. */
-const SENSITIVE_FILE_PATTERN =
- /\.(env|pem|key|p12|pfx|jks|keystore)(\..*)?$|\.npmrc$|\.pypirc$|id_(rsa|ed25519|ecdsa|dsa)$|credentials\.json$/i;
-
-/** Allowed env var prefixes/names for Claude CLI child processes. */
-const ALLOWED_ENV_KEYS = new Set([
- "PATH", "HOME", "USER", "SHELL", "TERM", "LANG", "TMPDIR", "XDG_CONFIG_HOME",
- "ANTHROPIC_API_KEY", "CLAUDE_API_KEY", "CLAUDE_CODE_API_KEY",
- "GH_TOKEN", "GITHUB_TOKEN",
-]);
-
-/** Build a minimal env for Claude CLI — only pass through what's needed. */
-function buildClaudeEnv(): NodeJS.ProcessEnv {
- const env: Record<string, string> = {};
- for (const key of ALLOWED_ENV_KEYS) {
- if (process.env[key]) env[key] = process.env[key]!;
- }
- return env as unknown as NodeJS.ProcessEnv;
-}
-
-/** Build the default worktree directory path under `.claude/worktrees/`.
*/
-export function buildWorktreePath(repoPath: string, slug: string, shortId: string): string {
- return join(repoPath, ".claude", "worktrees", `${slug}-${shortId}`);
-}
-
-// ── Resume capability check (appSettings-cached, globalThis for HMR) ──
-
-const _g = globalThis as unknown as { _resumeCheckPromise?: Promise<boolean>; _resumeCheckAt?: number };
-const RESUME_CHECK_IN_MEMORY_TTL = 60 * 60 * 1000; // 1 hour — re-check DB after this
-
-async function isResumeSupported(): Promise<boolean> {
- // Clear stale in-memory cache so DB TTL takes effect for long-running processes
- if (_g._resumeCheckPromise && _g._resumeCheckAt && Date.now() - _g._resumeCheckAt > RESUME_CHECK_IN_MEMORY_TTL) {
- _g._resumeCheckPromise = undefined;
- }
- if (!_g._resumeCheckPromise) {
- _g._resumeCheckAt = Date.now();
- _g._resumeCheckPromise = doResumeCheck().catch((err) => {
- console.error("[pipeline] Resume check failed, will retry:", err);
- _g._resumeCheckPromise = undefined;
- return false;
- });
- }
- return _g._resumeCheckPromise;
-}
-
-async function doResumeCheck(): Promise<boolean> {
- // Check DB cache first (survives process restarts)
- const cached = getSetting("claude-resume-supported");
- const checkedAt = getSetting("claude-resume-checked-at");
-
- if (cached !== null && checkedAt) {
- const supported = cached === "true";
- const age = Date.now() - new Date(checkedAt).getTime();
- // Cache true for 7 days; cache false for only 1 hour (self-heals after transient failures)
- const ttl = supported ?
7 * 24 * 60 * 60 * 1000 : 60 * 60 * 1000; - if (age < ttl) { - console.log(`[pipeline] Resume capability cached: ${supported}`); - return supported; - } - } - - console.log("[pipeline] Checking --resume capability..."); - - // Run verification: create a session, then resume it - const testId = crypto.randomUUID(); - const create = await runClaudePhase({ - workdir: "/tmp", - prompt: "Reply with exactly: VERIFY_OK", - timeoutMs: 30_000, - sessionId: testId, - }); - if (!create.success || !create.output.includes("VERIFY_OK")) { - console.log("[pipeline] Resume check: create phase failed, marking unsupported"); - cacheResumeResult(false); - return false; - } - - const resume = await runClaudePhase({ - workdir: "/tmp", - prompt: "Reply with exactly: RESUME_OK", - timeoutMs: 30_000, - resumeSessionId: testId, - }); - const supported = resume.success && resume.output.includes("RESUME_OK"); - console.log(`[pipeline] Resume capability: ${supported}`); - cacheResumeResult(supported); - return supported; -} - -function cacheResumeResult(supported: boolean) { - setSetting("claude-resume-supported", String(supported)); - setSetting("claude-resume-checked-at", new Date().toISOString()); -} - -/** - * Run a single Claude CLI phase. - * Prompt is piped via stdin. Uses --session-id or --resume. - * Parses stream-json output for result text. 
- */
-async function runClaudePhase(opts: {
- workdir: string;
- prompt: string;
- systemPrompt?: string;
- timeoutMs?: number;
- sessionId?: string;
- resumeSessionId?: string;
-}): Promise<PipelinePhaseResult> {
- // Compute once, use everywhere — no double-UUID risk
- const effectiveSessionId = opts.resumeSessionId || opts.sessionId || crypto.randomUUID();
-
- const args = [
- "-p",
- "--verbose",
- "--output-format", "stream-json",
- "--dangerously-skip-permissions",
- ];
-
- if (opts.resumeSessionId) {
- args.push("--resume", opts.resumeSessionId);
- } else {
- args.push("--session-id", effectiveSessionId);
- }
-
- // System prompt only on creation (resumed sessions inherit it)
- if (opts.systemPrompt && !opts.resumeSessionId) {
- args.push("--append-system-prompt", opts.systemPrompt);
- }
-
- const timeout = opts.timeoutMs || PHASE_TIMEOUT_MS;
-
- return new Promise<PipelinePhaseResult>((resolve) => {
- const proc = spawn(resolveClaudePath(), args, {
- cwd: opts.workdir,
- env: buildClaudeEnv(),
- });
-
- proc.stdin!.write(opts.prompt);
- proc.stdin!.end();
-
- let buffer = "";
- let resultText = "";
- const assistantBlocks: string[] = [];
- let assistantBlocksSize = 0;
- let timedOut = false;
-
- const timer = setTimeout(() => {
- timedOut = true;
- proc.kill("SIGTERM");
- // Force kill if SIGTERM is ignored after 30s
- setTimeout(() => { try { proc.kill("SIGKILL"); } catch { /* already dead */ } }, 30000);
- }, timeout);
-
- proc.stdout!.on("data", (chunk: Buffer) => {
- buffer += chunk.toString();
- // Cap buffer to prevent OOM from very long lines without newlines
- if (buffer.length > 1_000_000) {
- buffer = buffer.slice(-500_000);
- }
- const lines = buffer.split("\n");
- buffer = lines.pop() || "";
-
- for (const line of lines) {
- const trimmed = line.trim();
- if (!trimmed) continue;
- try {
- const event = JSON.parse(trimmed);
- if (event.type === "result" && event.result) {
- resultText = event.result;
- }
- if (event.type === "assistant" && event.message?.content) {
- for (const block of
event.message.content) { - if (block.type === "text" && block.text && assistantBlocksSize < MAX_FALLBACK_CHARS) { - assistantBlocks.push(block.text); - assistantBlocksSize += block.text.length; - } - } - } - } catch { /* skip non-JSON lines */ } - } - }); - - let stderrOutput = ""; - proc.stderr!.on("data", (chunk: Buffer) => { - stderrOutput += chunk.toString(); - if (stderrOutput.length > 10000) stderrOutput = stderrOutput.slice(-10000); - }); - - proc.on("close", (code) => { - clearTimeout(timer); - - // Process remaining buffer - if (buffer.trim()) { - try { - const event = JSON.parse(buffer.trim()); - if (event.type === "result" && event.result) resultText = event.result; - if (event.type === "assistant" && event.message?.content) { - for (const block of event.message.content) { - if (block.type === "text" && block.text && assistantBlocksSize < MAX_FALLBACK_CHARS) { - assistantBlocks.push(block.text); - assistantBlocksSize += block.text.length; - } - } - } - } catch { /* ignore */ } - } - - // Cap resultText to prevent unbounded DB writes - if (resultText.length > MAX_FALLBACK_CHARS) { - resultText = resultText.substring(0, MAX_FALLBACK_CHARS); - } - - let output = resultText.trim() || assistantBlocks.join("\n\n"); - if (timedOut) output = `[TIMEOUT after ${timeout / 1000}s] ${output}`; - if (!output && stderrOutput) output = stderrOutput; - - const hasQuestions = /##\s*Questions/i.test(output); - const questions = hasQuestions - ? 
output.substring(output.search(/##\s*Questions/i))
- : undefined;
-
- resolve({
- success: code === 0 && !timedOut,
- output,
- sessionId: effectiveSessionId,
- hasQuestions,
- questions,
- timedOut,
- });
- });
-
- proc.on("error", (err) => {
- clearTimeout(timer);
- resolve({ success: false, output: err.message, sessionId: effectiveSessionId });
- });
- });
-}
-
-// ── Helper functions ──────────────────────────────────────────
-
-async function updatePhase(issueId: string, phase: number, status: IssueStatus) {
- await db.update(issues).set({
- currentPhase: phase,
- status,
- updatedAt: new Date(),
- }).where(eq(issues.id, issueId));
-}
-
-async function failIssue(issueId: string, error: string) {
- await db.update(issues).set({
- status: "failed",
- error: error.substring(0, 10000),
- updatedAt: new Date(),
- }).where(eq(issues.id, issueId));
-}
-
-/** Check if the issue has been cancelled (status set to "failed" externally). */
-async function isCancelled(issueId: string): Promise<boolean> {
- const [issue] = await db.select({ status: issues.status }).from(issues).where(eq(issues.id, issueId));
- return issue?.status === "failed";
-}
-
-async function sendIssueTransportMessage(
- issueId: string,
- config: IssuesTransportConfig,
- text: string
-): Promise<{ messageId?: number; slackTs?: string }> {
- if (config.kind === "telegram") {
- const truncated = text.length > 4096 ? text.substring(0, 4093) + "..."
: text;
- const messageId = await sendTelegramMessageWithId(config, truncated);
- return { messageId };
- }
-
- const [issue] = await db.select({
- slackChannelId: issues.slackChannelId,
- slackThreadTs: issues.slackThreadTs,
- }).from(issues).where(eq(issues.id, issueId)).limit(1);
-
- if (!issue?.slackChannelId || !issue.slackThreadTs) {
- throw new Error("Slack issue thread metadata missing");
- }
-
- const result = await sendSlackMessage(
- { botToken: config.botToken },
- issue.slackChannelId,
- telegramMarkupToSlackText(text).substring(0, SLACK_SAFE_MSG_LEN),
- issue.slackThreadTs
- );
-
- return { slackTs: result.ts };
-}
-
-async function notify(issueId: string, config: IssuesTransportConfig, text: string) {
- try {
- await sendIssueTransportMessage(issueId, config, text);
- } catch (err) {
- console.error("Failed to send issue notification:", err);
- }
-}
-
-async function handleQuestions(
- issueId: string,
- questions: string,
- config: IssuesTransportConfig
-): Promise<boolean> {
- const truncatedQ = config.kind === "telegram" && questions.length > TELEGRAM_SAFE_MSG_LEN
- ? questions.substring(0, TELEGRAM_SAFE_MSG_LEN) + "..."
- : questions;
-
- // Capture time BEFORE sending so we don't miss fast replies
- const questionTime = new Date();
-
- if (config.kind === "telegram") {
- const msgId = await sendTelegramMessageWithId(config,
- `Questions for issue ${issueId.substring(0, 8)}:\n\n${escapeHtml(truncatedQ)}\n\nReply to this message to answer.`
- );
-
- await db.insert(issueMessages).values({
- issueId,
- direction: "from_claude",
- message: questions,
- telegramMessageId: msgId,
- });
- } else {
- const result = await sendIssueTransportMessage(
- issueId,
- config,
- `Questions for issue ${issueId.substring(0, 8)}:\n\n${truncatedQ}\n\nReply in this Slack thread to answer.`
- );
-
- await db.insert(issueMessages).values({
- issueId,
- direction: "from_claude",
- message: questions,
- slackMessageTs: result.slackTs,
- });
- }
-
- await db.update(issues).set({ status: "waiting_for_input", updatedAt: new Date() }).where(eq(issues.id, issueId));
-
- // Wait for reply (polling for user replies newer than the question)
- const startWait = Date.now();
- while (Date.now() - startWait < QA_TIMEOUT_MS) {
- // Check cancellation
- if (await isCancelled(issueId)) return false;
-
- const [userReply] = await db.select().from(issueMessages)
- .where(and(
- eq(issueMessages.issueId, issueId),
- eq(issueMessages.direction, "from_user"),
- gt(issueMessages.createdAt, questionTime)
- ))
- .limit(1);
-
- if (userReply) return true;
- await new Promise(r => setTimeout(r, 5000));
- }
-
- return false;
-}
-
-async function getUserAnswers(issueId: string): Promise<string | null> {
- const messages = await db.select().from(issueMessages)
- .where(and(eq(issueMessages.issueId, issueId), eq(issueMessages.direction, "from_user")))
- .orderBy(issueMessages.createdAt);
-
- if (messages.length === 0) return null;
- return messages.map(m => m.message).join("\n\n");
-}
-
-/** Extract a PipelinePhaseResult from a settled promise, returning a failure result on rejection.
*/
-function settledResult(r: PromiseSettledResult<PipelinePhaseResult>): PipelinePhaseResult {
- if (r.status === "fulfilled") return r.value;
- return { success: false, output: `Agent failed: ${String(r.reason)}` };
-}
-
-// ── Planning session helpers ─────────────────────────────────
-
-/** Create a fresh planning session with a new UUID. Updates planningSessionId in DB. */
-async function createFreshPlanningSession(
- workdir: string,
- prompt: string,
- issueId: string,
-): Promise<{ result: PipelinePhaseResult; sessionId: string }> {
- const sessionId = crypto.randomUUID();
- const result = await runClaudePhase({
- workdir,
- prompt,
- systemPrompt: "You are an expert implementation planner. Create detailed, actionable plans.",
- timeoutMs: PHASE_TIMEOUT_MS,
- sessionId,
- });
- await db.update(issues).set({ planningSessionId: sessionId, updatedAt: new Date() })
- .where(eq(issues.id, issueId));
- return { result, sessionId };
-}
-
-/** Build a prompt for resumed planning sessions (only new context, no duplicate planning prompt). */
-function buildResumePlanningPrompt(
- reviewFeedback: string | null | undefined,
- completenessReview: string | null | undefined,
- userAnswers: string | null,
- attachmentPaths: string[] = [],
-): string {
- const attachmentReminder = attachmentPaths.length > 0
- ? `\n\n## Attached Images (still available)\nUse the Read tool to view these images for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}\n`
- : "";
-
- if (reviewFeedback) {
- return `Your previous plan was reviewed and found to have issues. Create a REVISED plan addressing all feedback below.
-
-## Review Feedback
-${reviewFeedback}
-${completenessReview ? `\n## Completeness Review Feedback\n${completenessReview}` : ""}
-${userAnswers ? `\n## User's Answers to Your Questions\n${userAnswers}` : ""}
-${attachmentReminder}
-Revise your implementation plan to address all the review feedback. Include the "## Codebase Analysis" section again.
-End with "VERDICT: READY" or "## Questions" if you need more information.`; - } - if (userAnswers) { - return `Here are the answers to your questions: - -${userAnswers} -${attachmentReminder} -Please update your implementation plan based on these answers. Include the "## Codebase Analysis" section. -End with "VERDICT: READY" or "## Questions" if you need more information.`; - } - // Resuming after crash with no new context — ask to continue - return `Continue your implementation plan where you left off. Include the "## Codebase Analysis" section. -${attachmentReminder} -End with "VERDICT: READY" or "## Questions" if you need more information.`; -} - -/** Build a full planning prompt with all available context (for fresh sessions). */ -function buildFullPlanningPrompt( - description: string, - planOutput: string, - reviewFeedback: string | null | undefined, - completenessReview: string | null | undefined, - userAnswers: string | null, - attachmentPaths: string[] = [], -): string { - let prompt = buildPlanningPrompt(description, attachmentPaths); - if (planOutput && reviewFeedback) { - prompt += `\n\n## Previous Plan Review Feedback\n${reviewFeedback}`; - } - if (planOutput && completenessReview) { - prompt += `\n\n## Completeness Review Feedback\n${completenessReview}`; - } - if (userAnswers) { - prompt += `\n\n## User's Answers to Questions\n${userAnswers}`; - } - return prompt; -} - -// ── Prompt builders ────────────────────────────────────────── - -function buildPlanningPrompt(description: string, attachmentPaths: string[] = []): string { - const attachmentSection = attachmentPaths.length > 0 - ? `\n\n## Attached Images\nThe following images were provided with this issue. Use the Read tool to view them for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}` - : ""; - - return `You are tasked with creating a detailed implementation plan for the following issue. - -## Issue Description -${description} -${attachmentSection} - -## Instructions -1. 
Analyze the codebase to understand the existing architecture and patterns -2. Create a step-by-step implementation plan -3. Identify files that need to be created or modified -4. Note any potential risks or edge cases -5. If you have questions that would significantly affect the plan, add a "## Questions" section at the end - -## Output Format -Provide a structured plan with: -- Overview of the approach -- Detailed steps with file paths -- Any new dependencies needed -- Testing strategy - -**Important**: Include a "## Codebase Analysis" section with: -- Key file paths you examined and their purposes -- Relevant code patterns and conventions observed -- Critical code snippets that the implementer must reference -- Architecture notes (how components connect) - -This analysis will be used by the implementation phase, so be thorough. - -End with either: -- "VERDICT: READY" if the plan is complete -- "## Questions" section if you need clarification`; -} - -function buildAdversarialReviewPrompt(plan: string, priorFindings?: string): string { - const priorSection = priorFindings ? ` -## Prior Review Findings (from previous rounds) -The following CRITICAL issues were found in earlier review rounds. You MUST verify that EACH of these has been addressed in the current plan. If any remain unaddressed, re-list them as CRITICAL. - -${priorFindings} - -` : ""; - - return `You are an adversarial plan reviewer. Your job is to find problems, not validate. - -## Plan to Review -${plan} -${priorSection} -## Instructions -Review this plan for: -1. Security vulnerabilities -2. Missing error handling -3. Race conditions or concurrency issues -4. Incorrect assumptions about the codebase -5. Missing steps or dependencies -6. Breaking changes -${priorFindings ? "7. 
Verify ALL prior findings listed above have been addressed" : ""} - -For each issue found, classify as: -- CRITICAL: Must be fixed before implementation -- WARNING: Should be addressed but not blocking - -## Output Format -List each issue with its severity, description, and suggested fix. - -End with: -- "VERDICT: PASS" if no CRITICAL issues found -- "VERDICT: FAIL" if CRITICAL issues exist`; -} - -function buildCompletenessReviewPrompt(plan: string, priorFindings?: string): string { - const priorSection = priorFindings ? ` -## Prior Review Findings (from previous rounds) -The following issues were found in earlier review rounds. You MUST verify that EACH of these has been addressed in the current plan. If any remain unaddressed, re-list them as blocking gaps. - -${priorFindings} - -` : ""; - - return `You are a completeness and feasibility reviewer. - -## Plan -${plan} -${priorSection} -## Instructions -Check the plan for: -1. Missing implementation steps -2. Incorrect assumptions about the existing code -3. Missing test coverage -4. Integration gaps -5. Deployment or migration concerns -${priorFindings ? "6. Verify ALL prior findings listed above have been addressed" : ""} - -For each gap found, classify as: -- MISSING_STEP: A required step is not in the plan -- WRONG_ASSUMPTION: The plan assumes something incorrect about the codebase - -## Output Format -List each finding with classification and description. - -End with: -- "VERDICT: PASS" if the plan is complete and feasible -- "VERDICT: FAIL" if there are blocking gaps`; -} - -function buildPlanFixPrompt(plan: string, adversarialReview: string, completenessReview: string, priorFindings?: string): string { - const priorSection = priorFindings ? ` -## Previously Identified Issues (from earlier rounds) -These issues were found in earlier review rounds. Ensure they are ALSO addressed in your revision, not just the latest findings. - -${priorFindings} -` : ""; - - return `You are an expert plan fixer. 
Your job is to surgically revise an implementation plan to address ALL findings from two independent reviewers. - -## Current Plan -${plan} - -## Adversarial Review Findings -${adversarialReview} - -## Completeness Review Findings -${completenessReview} -${priorSection} -## Instructions -1. Read EVERY finding from both reviewers — CRITICAL, WARNING, and NOTE severity -2. For each finding, make a concrete change to the plan that fully addresses it -3. Do NOT rewrite the plan from scratch — preserve all parts that were not flagged -4. If a finding suggests a specific fix, incorporate it directly -5. If two findings conflict, prefer the safer/more correct approach -6. Ensure the revised plan is still coherent and self-consistent after all fixes -${priorFindings ? "7. Also verify that ALL previously identified issues (listed above) remain addressed" : ""} - -## Output Format -Output the COMPLETE revised plan (not just the diffs). The output must be a standalone, clean plan that can be handed directly to an implementer. Do NOT include a changelog, commentary, or summary of what was changed — just output the revised plan text and nothing else.`; -} - -function buildImplementationPrompt(plan: string, review1: string, review2: string, attachmentPaths: string[] = []): string { - const attachmentSection = attachmentPaths.length > 0 - ? `\n\n## Attached Images\nUse the Read tool to view these images for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}` - : ""; - - return `Implement the following plan. Follow it precisely, incorporating the review feedback. - -## Implementation Plan -${plan} - -## Review Feedback to Address -### Adversarial Review -${review1} - -### Completeness Review -${review2} -${attachmentSection} - -## Instructions -1. Implement each step of the plan -2. Address all review feedback -3. Write tests for new functionality -4. Ensure all existing tests still pass -5. CRITICAL: You MUST commit all changes before finishing. 
Run \`git add -A && git commit -m "feat: "\`. Uncommitted changes will be lost. - -Do NOT create a PR — that will be done in a separate step.`; -} - -// ── Specialist code review prompts (READ-ONLY) ────────────── - -function buildBugsLogicReviewPrompt(defaultBranch: string): string { - return `You are a specialist code reviewer focused on BUGS AND LOGIC ERRORS. -Your job is to FIND defects — do NOT modify any files. - -## Instructions -1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes -2. Read every changed file in full for context -3. For each change, actively try to break it: - - Logic errors, wrong conditions, inverted booleans, off-by-one - - Null/undefined handling gaps - - Race conditions and concurrency bugs - - Missing error handling, swallowed errors - - Boundary conditions (empty, zero, MAX_INT, very large inputs) -4. DO NOT modify any files. You are a READ-ONLY reviewer. - -## Output Format -For each issue found: -- **Severity**: CRITICAL / WARNING / NOTE -- **File**: exact file path and line number -- **Bug**: What's wrong (be specific) -- **Proof**: Input or scenario that triggers the bug -- **Fix**: Suggested code change - -End with: -- "VERDICT: PASS" if no CRITICAL issues found -- "VERDICT: FAIL" if CRITICAL issues exist`; -} - -function buildSecurityEdgeCasesReviewPrompt(defaultBranch: string): string { - return `You are a specialist code reviewer focused on SECURITY AND EDGE CASES. -Your job is to FIND vulnerabilities — do NOT modify any files. - -## Instructions -1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes -2. Read every changed file in full for context -3. 
Analyze from an attacker's perspective: - - Injection (SQL, command, XSS, path traversal, SSRF) - - Authentication/authorization bypasses - - Sensitive data exposure in logs, errors, responses - - Input validation gaps (malformed input, special chars, huge strings) - - Denial of service vectors (regex DoS, unbounded queries) - - Edge cases: empty inputs, concurrent requests, partial failures -4. DO NOT modify any files. You are a READ-ONLY reviewer. - -## Output Format -For each issue found: -- **Severity**: CRITICAL / WARNING / NOTE -- **File**: exact file path and line number -- **Vulnerability**: What's the issue -- **Attack scenario**: How to exploit it -- **Fix**: Suggested remediation - -End with: -- "VERDICT: PASS" if no CRITICAL issues found -- "VERDICT: FAIL" if CRITICAL issues exist`; -} - -function buildDesignPerformanceReviewPrompt(defaultBranch: string): string { - return `You are a specialist code reviewer focused on DESIGN AND PERFORMANCE. -Your job is to FIND design issues — do NOT modify any files. - -## Instructions -1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes -2. Read changed files and related files for context -3. Evaluate: - - Violations of existing code patterns and conventions - - Missing or inadequate test coverage - - API design issues (breaking changes, inconsistent interfaces) - - Performance problems (N+1 queries, unnecessary work, large allocations) - - Code duplication or missing abstractions - - Backwards compatibility concerns -4. DO NOT modify any files. You are a READ-ONLY reviewer. 
- -## Output Format -For each issue found: -- **Severity**: CRITICAL / WARNING / NOTE -- **File**: exact file path and line number -- **Issue**: What's wrong -- **Impact**: Concrete consequence -- **Fix**: Suggested improvement - -End with: -- "VERDICT: PASS" if no CRITICAL issues found -- "VERDICT: FAIL" if CRITICAL issues exist`; -} - -function buildCodeFixPrompt( - defaultBranch: string, - bugsReview: string, - securityReview: string, - designReview: string, -): string { - return `Fix ALL issues identified by the code reviewers below. - -## Review Findings - -### Bugs & Logic Review -${bugsReview} - -### Security & Edge Cases Review -${securityReview} - -### Design & Performance Review -${designReview} - -## Instructions -1. Run \`git diff ${defaultBranch}...HEAD\` to see current changes -2. Fix every CRITICAL finding listed above -3. Fix WARNING findings where the fix is straightforward -4. Run tests after each fix to ensure no regressions -5. CRITICAL: You MUST commit all fixes before finishing. Run \`git add -A && git commit -m "fix: "\`. Uncommitted changes will be lost. -6. Do NOT create a PR - -End with: -- "VERDICT: FIXED" if all CRITICAL issues were addressed -- "VERDICT: PARTIAL" if some could not be fixed (explain why)`; -} - -/** Verify worktree is clean after parallel read-only reviewers. Reset if dirty. */ -function ensureWorktreeClean(worktreeDir: string): void { - try { - const status = execFileSync("git", ["status", "--porcelain"], { - cwd: worktreeDir, encoding: "utf-8", - }).trim(); - if (status) { - console.warn("[pipeline] Review agents modified worktree unexpectedly, resetting"); - execFileSync("git", ["reset", "--hard", "HEAD"], { cwd: worktreeDir, stdio: "ignore" }); - execFileSync("git", ["clean", "-fd"], { cwd: worktreeDir, stdio: "ignore" }); - } - } catch (err) { - console.error("[pipeline] ensureWorktreeClean failed:", err); - } -} - -/** - * Auto-commit any uncommitted changes left behind by a phase. 
- * Prevents ensureWorktreeClean() from wiping real implementation work. - * Returns true if an auto-commit was created, false if worktree was already clean. - */ -function autoCommitUncommittedChanges(worktreeDir: string, commitMessage: string): boolean { - try { - const status = execFileSync("git", ["status", "--porcelain"], { - cwd: worktreeDir, encoding: "utf-8", - }).trim(); - - if (!status) return false; - - // Porcelain format: 2-char status prefix + space + path (e.g., "?? file.txt", " M file.txt") - const lines = status.split("\n").filter(Boolean); - console.warn(`[pipeline] Auto-committing ${lines.length} uncommitted changes:`); - for (const l of lines) console.warn(` ${l}`); - - // Stage tracked file modifications - execFileSync("git", ["add", "-u"], { cwd: worktreeDir, stdio: "ignore" }); - - // Stage genuinely new files, skipping secrets/artifacts - const untracked = lines.filter(l => l.startsWith("??")); - const toStage: string[] = []; - for (const line of untracked) { - // Porcelain format: path starts at index 3. Git quotes paths with spaces/unicode. 
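As an aside, the porcelain parsing and unquoting described in the comment above can be sketched as a standalone helper. This is a simplified, hypothetical extraction (the names `parsePorcelainPaths` and the `SENSITIVE_FILE_PATTERN` regex below are illustrative; the real pattern is defined elsewhere in `pipeline.ts` and is not shown in this diff). Git's actual quoting also escapes control characters, which this sketch ignores:

```typescript
// Assumed stand-in for the pipeline's real sensitive-file pattern.
const SENSITIVE_FILE_PATTERN = /\.(env|pem|key)$|credentials/i;

// Parse `git status --porcelain` output into stageable paths,
// unquoting paths Git wrapped in double quotes and skipping secrets.
function parsePorcelainPaths(porcelain: string): string[] {
  const paths: string[] = [];
  for (const line of porcelain.split("\n").filter(Boolean)) {
    // Two-character status code, one space, then the path (index 3 onward).
    let filePath = line.slice(3);
    // Git quotes paths containing spaces/unicode and backslash-escapes
    // embedded quotes and backslashes.
    if (filePath.startsWith('"') && filePath.endsWith('"')) {
      filePath = filePath.slice(1, -1).replace(/\\"/g, '"').replace(/\\\\/g, "\\");
    }
    if (SENSITIVE_FILE_PATTERN.test(filePath)) continue; // skip secrets/artifacts
    paths.push(filePath);
  }
  return paths;
}
```

The two-character prefix means both `?? new.txt` (untracked) and ` M changed.ts` (modified) yield their path at index 3, which is why the slice works uniformly.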
- let filePath = line.slice(3); - if (filePath.startsWith('"') && filePath.endsWith('"')) { - filePath = filePath.slice(1, -1).replace(/\\"/g, '"').replace(/\\\\/g, "\\"); - } - if (SENSITIVE_FILE_PATTERN.test(filePath)) { - console.warn(`[pipeline] Skipping suspicious file: ${filePath}`); - continue; - } - toStage.push(filePath); - } - if (toStage.length) { - execFileSync("git", ["add", "--", ...toStage], { cwd: worktreeDir, stdio: "ignore" }); - } - - execFileSync("git", ["commit", "-m", commitMessage], { - cwd: worktreeDir, encoding: "utf-8", - }); - return true; - } catch (err) { - console.error("[pipeline] autoCommitUncommittedChanges failed:", err); - // Unstage to leave worktree in a predictable state for retry - try { execFileSync("git", ["reset", "HEAD"], { cwd: worktreeDir, stdio: "ignore" }); } catch { /* ignore */ } - return false; - } -} - -/** Check whether the branch has any commits beyond the base branch. */ -function hasBranchCommits(worktreeDir: string, baseBranch: string): boolean { - try { - const log = execFileSync("git", ["log", `${baseBranch}..HEAD`, "--oneline"], { - cwd: worktreeDir, encoding: "utf-8", - }).trim(); - return log.length > 0; - } catch (err) { - console.error(`[pipeline] Cannot compare against base branch '${baseBranch}':`, err); - return false; - } -} - -function buildPrCreationPrompt(title: string, description: string, defaultBranch: string, attachmentPaths: string[] = []): string { - const attachmentSection = attachmentPaths.length > 0 - ? `\n\n## Attached Images\nUse the Read tool to view these images for visual context when writing the PR description:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}` - : ""; - - return `Create a pull request for the changes on this branch. - -## Issue Details -Title: ${title} -Description: ${description} -${attachmentSection} - -## Instructions -1. Push the current branch to the remote -2. Create a PR using \`gh pr create\` targeting ${defaultBranch} -3. 
Use a descriptive title based on the issue
-4. Include a summary of changes in the PR body
-5. Include the issue description for context
-
-Output the PR URL when done.`;
-}
-
-// ── Main pipeline ─────────────────────────────────────────────
-
-export async function runIssuePipeline(
-  issueId: string,
-  transportConfig: IssuesTransportConfig
-): Promise<void> {
-  const [issue] = await db.select().from(issues).where(eq(issues.id, issueId));
-  if (!issue) throw new Error(`Issue ${issueId} not found`);
-
-  const [repo] = await db.select().from(repositories).where(eq(repositories.id, issue.repositoryId));
-  if (!repo) throw new Error(`Repository not found for issue ${issueId}`);
-
-  // Pre-flight: verify repo exists and is a git repo
-  if (!existsSync(repo.localRepoPath)) {
-    await failIssue(issueId, `Repository path does not exist: ${repo.localRepoPath}`);
-    return;
-  }
-  try {
-    execFileSync("git", ["rev-parse", "--git-dir"], { cwd: repo.localRepoPath, stdio: "ignore" });
-  } catch {
-    await failIssue(issueId, `Not a git repository: ${repo.localRepoPath}`);
-    return;
-  }
-
-  // Pre-flight: verify gh CLI is available
-  try {
-    execFileSync("gh", ["auth", "status"], { cwd: repo.localRepoPath, stdio: "ignore" });
-  } catch {
-    await failIssue(issueId, "gh CLI not authenticated. Run: gh auth login");
-    return;
-  }
-
-  // Create or reuse worktree
-  const slug = issue.title.toLowerCase().replace(/[^a-z0-9]+/g, "-").substring(0, 40);
-  const shortId = issue.id.substring(0, 8);
-  let branchName = issue.branchName || `issue/${slug}-${shortId}`;
-  let worktreeDir = issue.worktreePath || buildWorktreePath(repo.localRepoPath, slug, shortId);
-
-  // Skip worktree creation if it already exists (retry/resume scenario)
-  if (!existsSync(worktreeDir)) {
-    mkdirSync(join(repo.localRepoPath, ".claude", "worktrees"), { recursive: true });
-
-    // Fetch latest default branch so worktree starts from current remote code
-    try {
-      execFileSync("git", ["fetch", "origin", repo.defaultBranch], {
-        cwd: repo.localRepoPath, stdio: "ignore", timeout: 30_000,
-      });
-    } catch {
-      // Non-fatal: proceed with last-known origin/ or local state.
-      // Common reasons: offline, no remote named 'origin', non-default refspec.
-      console.warn(`[pipeline] Could not fetch latest ${repo.defaultBranch} — will use last-known origin/${repo.defaultBranch}`);
-    }
-
-    try {
-      execFileSync("git", ["worktree", "add", worktreeDir, "-b", branchName, `origin/${repo.defaultBranch}`], {
-        cwd: repo.localRepoPath, stdio: "ignore",
-      });
-    } catch {
-      try {
-        execFileSync("git", ["worktree", "add", worktreeDir, branchName], {
-          cwd: repo.localRepoPath, stdio: "ignore",
-        });
-      } catch (e) {
-        await failIssue(issueId, `Failed to create worktree: ${e}`);
-        return;
-      }
-    }
-  }
-
-  const phaseSessionIds: Record<string, string> = issue.phaseSessionIds as Record<string, string> || {};
-
-  // Determine start phase (resume support)
-  const startPhase = issue.currentPhase > 0 ?
issue.currentPhase : 1; - - // Check if --resume is supported (cached in appSettings, globalThis for HMR) - const resumeSupported = await isResumeSupported(); - - // Living planning session: created in Phase 1 iter 1, resumed across iterations + Phase 4 - let planningSessionId = issue.planningSessionId || crypto.randomUUID(); - let isFirstPlanRun = !issue.planningSessionId; // true = --session-id (create), false = --resume - - // Defer planningSessionId write until after first successful phase (avoids stale UUID on early failure) - await db.update(issues).set({ - worktreePath: worktreeDir, - branchName, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - try { - // ── Phases 1-3: Planning + Reviews ───────────────────── - // Guard covers phases 1-3 since they're part of the planning loop - if (startPhase <= 3) { - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 1, "planning"); - await notify(issueId, transportConfig, `Planning started for: ${escapeHtml(issue.title)}`); - - let planOutput = ""; - let planIterations = 0; - let planApproved = false; - let skipPlanning = false; // Set after plan-fix to go directly to re-review - const priorPlanFindings: string[] = []; // Accumulated findings from previous review rounds - - while (!planApproved && planIterations < MAX_PLAN_ITERATIONS) { - if (!skipPlanning) { - // Hoist DB queries above the branching logic (avoids duplication) - const [currentIssue] = await db.select().from(issues).where(eq(issues.id, issueId)); - const userAnswers = await getUserAnswers(issueId); - // Re-query attachments each iteration (user may add photos via Q&A replies) - const attachments = await getIssueAttachments(issueId); - const attachmentPaths = attachments.map(a => a.filePath); - - // Build the full prompt (used for fresh sessions and as fallback) - const freshPrompt = buildFullPlanningPrompt( - issue.description, planOutput, currentIssue?.planReview1, currentIssue?.planReview2, userAnswers, 
attachmentPaths, - ); - - // Run Phase 1 — create, resume, or fresh fallback - let planResult: PipelinePhaseResult; - - if (isFirstPlanRun) { - // CREATE the planning session - planResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: freshPrompt, - systemPrompt: "You are an expert implementation planner. Create detailed, actionable plans.", - timeoutMs: PHASE_TIMEOUT_MS, - sessionId: planningSessionId, - }); - isFirstPlanRun = false; - } else if (resumeSupported) { - // RESUME the planning session (keeps exploration context!) - const resumePrompt = buildResumePlanningPrompt( - currentIssue?.planReview1, currentIssue?.planReview2, userAnswers, attachmentPaths, - ); - planResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: resumePrompt, - timeoutMs: PHASE_TIMEOUT_MS, - resumeSessionId: planningSessionId, - }); - - // If resume failed (not timeout), fall back to fresh session with full context - if (!planResult.success && !planResult.timedOut) { - console.log("[pipeline] Planning resume failed, falling back to fresh session"); - const fresh = await createFreshPlanningSession(worktreeDir, freshPrompt, issueId); - planResult = fresh.result; - planningSessionId = fresh.sessionId; - } - } else { - // Resume not supported — fresh session each iteration (current behavior) - const fresh = await createFreshPlanningSession(worktreeDir, freshPrompt, issueId); - planResult = fresh.result; - planningSessionId = fresh.sessionId; - } - - if (!planResult.success) { - await failIssue(issueId, `Planning failed: ${planResult.output.substring(0, 2000)}`); - return; - } - - // Store iteration-indexed session IDs (keep "1" pointing to latest for CLI resume) - const planIterKey = planIterations > 0 ? 
`.${planIterations + 1}` : ""; - if (planResult.sessionId) phaseSessionIds[`1${planIterKey}`] = planResult.sessionId; - phaseSessionIds["1"] = planResult.sessionId!; - planOutput = planResult.output; - await db.update(issues).set({ - planOutput, - planningSessionId, - phaseSessionIds, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - // Handle questions - if (planResult.hasQuestions && planResult.questions) { - const answered = await handleQuestions(issueId, planResult.questions, transportConfig); - if (!answered) { - await failIssue(issueId, "Timed out waiting for user reply to questions"); - return; - } - continue; - } - } else { - skipPlanning = false; - } - - // Count this as a plan iteration (questions don't consume iterations) - planIterations++; - - // ── Phase 2: Plan Verification (2 reviewers in parallel) ── - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 2, "reviewing_plan_1"); - await notify(issueId, transportConfig, `Plan verification started (2 reviewers in parallel)`); - - const priorFindingsText = priorPlanFindings.length > 0 - ? priorPlanFindings.join("\n\n========================================\n\n") - .substring(0, MAX_FALLBACK_CHARS) - : undefined; - - const planReviewResults = await Promise.allSettled([ - runClaudePhase({ - workdir: worktreeDir, - prompt: buildAdversarialReviewPrompt(planOutput, priorFindingsText), - systemPrompt: "You are an adversarial plan reviewer. Find problems, not validate.", - timeoutMs: PHASE_TIMEOUT_MS, - }), - runClaudePhase({ - workdir: worktreeDir, - prompt: buildCompletenessReviewPrompt(planOutput, priorFindingsText), - systemPrompt: "You are a completeness and feasibility reviewer. 
Find gaps.", - timeoutMs: PHASE_TIMEOUT_MS, - }), - ]); - const review1Result = settledResult(planReviewResults[0]); - const review2Result = settledResult(planReviewResults[1]); - - // Store iteration-indexed review session IDs (keep "2"/"3" pointing to latest for CLI resume) - const reviewIterKey = planIterations > 1 ? `.${planIterations}` : ""; - if (review1Result.sessionId) phaseSessionIds[`2${reviewIterKey}`] = review1Result.sessionId; - if (review2Result.sessionId) phaseSessionIds[`3${reviewIterKey}`] = review2Result.sessionId; - if (review1Result.sessionId) phaseSessionIds["2"] = review1Result.sessionId; - if (review2Result.sessionId) phaseSessionIds["3"] = review2Result.sessionId; - // Accumulate reviews across iterations (prefix with round number for context) - const roundReview1 = planIterations > 1 - ? `# Plan Review Round ${planIterations} - Adversarial\n${review1Result.output}` - : review1Result.output; - const roundReview2 = planIterations > 1 - ? `# Plan Review Round ${planIterations} - Completeness\n${review2Result.output}` - : review2Result.output; - - const [prevIssue] = await db.select({ - pr1: issues.planReview1, - pr2: issues.planReview2, - }).from(issues).where(eq(issues.id, issueId)); - - // Newest round first so truncation drops stale rounds, not the latest - const accumulatedReview1 = planIterations === 1 - ? roundReview1 - : (roundReview1 + "\n\n========================================\n\n" + (prevIssue?.pr1 || "")) - .substring(0, MAX_FALLBACK_CHARS); - const accumulatedReview2 = planIterations === 1 - ? 
roundReview2 - : (roundReview2 + "\n\n========================================\n\n" + (prevIssue?.pr2 || "")) - .substring(0, MAX_FALLBACK_CHARS); - - await db.update(issues).set({ - planReview1: accumulatedReview1, - planReview2: accumulatedReview2, - phaseSessionIds, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - // Check if EITHER reviewer found CRITICAL issues (VERDICT: FAIL) - const review1Failed = /VERDICT:\s*FAIL/i.test(review1Result.output); - const review2Failed = /VERDICT:\s*FAIL/i.test(review2Result.output); - - if (review1Failed || review2Failed) { - // Accumulate findings for subsequent review rounds - const roundFindings = [ - review1Failed ? `### Round ${planIterations} - Adversarial Review CRITICALs\n${review1Result.output}` : "", - review2Failed ? `### Round ${planIterations} - Completeness Review CRITICALs\n${review2Result.output}` : "", - ].filter(Boolean).join("\n\n"); - priorPlanFindings.push(roundFindings); - - if (planIterations >= MAX_PLAN_ITERATIONS) break; - if (await isCancelled(issueId)) return; - - // ── Plan Fix: surgically address review findings ── - await notify(issueId, transportConfig, - `Plan review round ${planIterations} failed. Fixing plan before attempt ${planIterations + 1}...` - ); - - const priorFindingsForFix = priorPlanFindings.length > 1 - ? priorPlanFindings.slice(0, -1).join("\n\n") - : undefined; - const capPerInput = Math.floor(MAX_FALLBACK_CHARS / (priorFindingsForFix ? 4 : 3)) - 500; - const fixPrompt = buildPlanFixPrompt( - planOutput.substring(0, capPerInput), - review1Result.output.substring(0, capPerInput), - review2Result.output.substring(0, capPerInput), - priorFindingsForFix?.substring(0, capPerInput), - ); - - // Always use a fresh session for fixes — resumed sessions respond - // conversationally and fail to produce structured plan output - const fixResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: fixPrompt, - systemPrompt: "You are an expert plan fixer. 
Surgically revise the plan to address all review findings. Output ONLY the complete revised plan text with no commentary.", - timeoutMs: PHASE_TIMEOUT_MS, - }); - - // Store fix session ID for debugging - if (fixResult.sessionId) { - phaseSessionIds[`fix.${planIterations}`] = fixResult.sessionId; - await db.update(issues).set({ phaseSessionIds, updatedAt: new Date() }) - .where(eq(issues.id, issueId)); - } - console.log(`[pipeline] Plan fix iteration ${planIterations} (session ${fixResult.sessionId}): success=${fixResult.success}, output=${fixResult.output.length} chars`); - - if (fixResult.success && fixResult.output.trim()) { - // Accept the fix output as the new plan — the next review round is the quality gate - planOutput = fixResult.output - .replace(/\n*VERDICT:\s*(READY|PASS|FAIL)[^\n]*/gi, "") - .trim(); - await db.update(issues).set({ - planOutput, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - console.log(`[pipeline] Plan updated from fix (iteration ${planIterations}), ${planOutput.length} chars`); - skipPlanning = true; // Skip planning, go straight to re-review - } else { - console.warn(`[pipeline] Plan fix failed (success=${fixResult.success}). Falling back to re-planning.`); - // Don't set skipPlanning — let the next iteration re-run planning with review feedback - } - - continue; - } - - planApproved = true; - } - - if (!planApproved) { - await failIssue(issueId, `Plan could not pass review after ${MAX_PLAN_ITERATIONS} attempts`); - await notify(issueId, transportConfig, `Planning failed after ${MAX_PLAN_ITERATIONS} attempts for: ${escapeHtml(issue.title)}`); - return; - } - - await notify(issueId, transportConfig, `Plan approved. 
Starting implementation...`); - } - - // ── Phase 4: Implementation (resume planning session if possible) ── - if (startPhase <= 4) { - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 4, "implementing"); - - const [currentIssue] = await db.select().from(issues).where(eq(issues.id, issueId)); - // Re-query attachments (user may have added photos via Q&A replies since planning) - const implAttachments = await getIssueAttachments(issueId); - const implAttachmentPaths = implAttachments.map(a => a.filePath); - let implPrompt = buildImplementationPrompt( - currentIssue?.planOutput || "", - currentIssue?.planReview1 || "", - currentIssue?.planReview2 || "", - implAttachmentPaths, - ); - const userAnswers = await getUserAnswers(issueId); - if (userAnswers) { - implPrompt += `\n\n## Additional Context from User\n${userAnswers}`; - } - - // Resume the planning session if the session exists and --resume is supported. - // Safe even on crash-resume: if resume fails, the fallback handles it below. - const canResume = ( - startPhase <= 4 && - currentIssue?.planningSessionId && - resumeSupported - ); - - let implResult: PipelinePhaseResult; - - if (canResume) { - implResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: implPrompt, - timeoutMs: IMPL_TIMEOUT_MS, - resumeSessionId: currentIssue!.planningSessionId!, - }); - - // If resume failed (not timeout), retry with fresh session - if (!implResult.success && !implResult.timedOut) { - console.log("[pipeline] Implementation resume failed, retrying with fresh session"); - implResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: implPrompt, - systemPrompt: "You are an expert software engineer. Implement the plan precisely.", - timeoutMs: IMPL_TIMEOUT_MS, - }); - } - } else { - // Fresh session (crash recovery, retry, or resume not supported) - implResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: implPrompt, - systemPrompt: "You are an expert software engineer. 
Implement the plan precisely.", - timeoutMs: IMPL_TIMEOUT_MS, - }); - } - - if (!implResult.success) { - await failIssue(issueId, `Implementation failed: ${implResult.output.substring(0, 2000)}`); - return; - } - - phaseSessionIds["4"] = implResult.sessionId!; - await db.update(issues).set({ phaseSessionIds, updatedAt: new Date() }).where(eq(issues.id, issueId)); - - // ── Commit gate: ensure implementation actually committed ── - autoCommitUncommittedChanges(worktreeDir, - "feat: implement changes\n\nAuto-committed by pipeline — implementation phase did not commit."); - if (!hasBranchCommits(worktreeDir, repo.defaultBranch)) { - await failIssue(issueId, "Implementation produced no changes — no commits found beyond base branch."); - return; - } - - await notify(issueId, transportConfig, `Implementation complete. Starting code review...`); - } - - // ── Phases 5-6: Adversarial Code Review + Auto-Fix Loop ── - if (startPhase <= 6) { - let codeApproved = false; - let crIterations = 0; - - while (!codeApproved && crIterations < MAX_CODE_REVIEW_ITERATIONS) { - crIterations++; - - // ── Phase 5: 3 specialist reviewers in parallel (READ-ONLY) ── - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 5, "reviewing_code_1"); - await notify(issueId, transportConfig, - `Code review round ${crIterations}/${MAX_CODE_REVIEW_ITERATIONS} (3 specialist reviewers)` - ); - - const codeReviewResults = await Promise.allSettled([ - runClaudePhase({ - workdir: worktreeDir, - prompt: buildBugsLogicReviewPrompt(repo.defaultBranch), - systemPrompt: "You are a bugs & logic reviewer. DO NOT modify files.", - timeoutMs: PHASE_TIMEOUT_MS, - }), - runClaudePhase({ - workdir: worktreeDir, - prompt: buildSecurityEdgeCasesReviewPrompt(repo.defaultBranch), - systemPrompt: "You are a security reviewer. 
DO NOT modify files.", - timeoutMs: PHASE_TIMEOUT_MS, - }), - runClaudePhase({ - workdir: worktreeDir, - prompt: buildDesignPerformanceReviewPrompt(repo.defaultBranch), - systemPrompt: "You are a design & performance reviewer. DO NOT modify files.", - timeoutMs: PHASE_TIMEOUT_MS, - }), - ]); - const bugsResult = settledResult(codeReviewResults[0]); - const securityResult = settledResult(codeReviewResults[1]); - const designResult = settledResult(codeReviewResults[2]); - - // Verify reviewers didn't modify the worktree - ensureWorktreeClean(worktreeDir); - - // Combine reviews with per-reviewer caps to stay under MAX_FALLBACK_CHARS - const capPerReviewer = Math.floor(MAX_FALLBACK_CHARS / 3) - 200; - const roundReview = [ - `# Code Review Round ${crIterations}`, - "## Bugs & Logic Review\n" + bugsResult.output.substring(0, capPerReviewer), - "## Security & Edge Cases Review\n" + securityResult.output.substring(0, capPerReviewer), - "## Design & Performance Review\n" + designResult.output.substring(0, capPerReviewer), - ].join("\n\n---\n\n"); - - // Accumulate reviews across iterations (don't overwrite prior rounds) - const [prevIssue] = await db.select({ cr1: issues.codeReview1 }).from(issues).where(eq(issues.id, issueId)); - const accumulatedReview = crIterations === 1 - ? roundReview - : ((prevIssue?.cr1 || "") + "\n\n========================================\n\n" + roundReview).substring(0, MAX_FALLBACK_CHARS); - - // Store all 3 reviewer session IDs with iteration indexing - const crIterKey = crIterations > 1 ? 
`.${crIterations}` : ""; - if (bugsResult.sessionId) phaseSessionIds[`5a${crIterKey}`] = bugsResult.sessionId; - if (securityResult.sessionId) phaseSessionIds[`5b${crIterKey}`] = securityResult.sessionId; - if (designResult.sessionId) phaseSessionIds[`5c${crIterKey}`] = designResult.sessionId; - // Keep "5" pointing to latest for CLI resume - if (bugsResult.sessionId) phaseSessionIds["5"] = bugsResult.sessionId; - await db.update(issues).set({ - codeReview1: accumulatedReview, - phaseSessionIds, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - // Check if all reviewers passed - const anyFailed = [bugsResult, securityResult, designResult].some( - r => /VERDICT:\s*FAIL/i.test(r.output) - ); - - if (!anyFailed) { - codeApproved = true; - await notify(issueId, transportConfig, `All code reviews passed!`); - break; - } - - if (crIterations >= MAX_CODE_REVIEW_ITERATIONS) break; - - // ── Phase 6: Auto-fix all issues ── - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 6, "reviewing_code_2"); - await notify(issueId, transportConfig, - `Fixing code review findings (round ${crIterations}/${MAX_CODE_REVIEW_ITERATIONS})...` - ); - - // Track HEAD before fix for convergence detection - let headBefore = ""; - try { - headBefore = execFileSync("git", ["rev-parse", "HEAD"], { - cwd: worktreeDir, encoding: "utf-8", - }).trim(); - } catch { /* ignore */ } - - const fixResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: buildCodeFixPrompt( - repo.defaultBranch, - bugsResult.output, - securityResult.output, - designResult.output, - ), - systemPrompt: "You are an expert software engineer. Fix all identified issues.", - timeoutMs: IMPL_TIMEOUT_MS, - }); - - // Accumulate fix outputs across iterations - const [prevFix] = await db.select({ cr2: issues.codeReview2 }).from(issues).where(eq(issues.id, issueId)); - const fixOutput = `# Fix Round ${crIterations}\n${fixResult.output}`; - const accumulatedFixes = crIterations === 1 - ? 
fixOutput - : ((prevFix?.cr2 || "") + "\n\n========================================\n\n" + fixOutput).substring(0, MAX_FALLBACK_CHARS); - - // Store iteration-indexed fix session IDs - const fixIterKey = crIterations > 1 ? `.${crIterations}` : ""; - if (fixResult.sessionId) phaseSessionIds[`6${fixIterKey}`] = fixResult.sessionId; - // Keep "6" pointing to latest for CLI resume - if (fixResult.sessionId) phaseSessionIds["6"] = fixResult.sessionId; - await db.update(issues).set({ - codeReview2: accumulatedFixes, - phaseSessionIds, - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - if (!fixResult.success) { - await failIssue(issueId, `Code fix failed: ${fixResult.output.substring(0, 2000)}`); - return; - } - - // Convergence check: did the fix agent make any commits? - // Must run BEFORE auto-commit so we measure the agent's own progress. - try { - const headAfter = execFileSync("git", ["rev-parse", "HEAD"], { - cwd: worktreeDir, encoding: "utf-8", - }).trim(); - if (headBefore && headBefore === headAfter) { - // Auto-commit any leftover changes before breaking, so they aren't lost - autoCommitUncommittedChanges(worktreeDir, - "fix: address code review findings\n\nAuto-committed by pipeline — fix phase did not commit."); - await notify(issueId, transportConfig, `Fix agent made no new commits. Stopping review loop.`); - break; - } - } catch { /* ignore */ } - - // Auto-commit any remaining uncommitted changes from the fix agent - autoCommitUncommittedChanges(worktreeDir, - "fix: address code review findings\n\nAuto-committed by pipeline — fix phase did not commit."); - - await notify(issueId, transportConfig, `Fixes applied. Re-reviewing...`); - } - - if (!codeApproved) { - await notify(issueId, transportConfig, - `Code review reached max iterations (${MAX_CODE_REVIEW_ITERATIONS}). 
Proceeding to PR.` - ); - } - } - - // ── Phase 7: PR Creation ─────────────────────────────── - if (startPhase <= 7) { - if (await isCancelled(issueId)) return; - await updatePhase(issueId, 7, "creating_pr"); - - const prAttachments = await getIssueAttachments(issueId); - const prAttachmentPaths = prAttachments.map(a => a.filePath); - const prResult = await runClaudePhase({ - workdir: worktreeDir, - prompt: buildPrCreationPrompt(issue.title, issue.description, repo.defaultBranch, prAttachmentPaths), - systemPrompt: "Create a pull request using the gh CLI.", - timeoutMs: PHASE_TIMEOUT_MS, - }); - - phaseSessionIds["7"] = prResult.sessionId!; - - if (!prResult.success) { - await db.update(issues).set({ phaseSessionIds, status: "failed", error: `PR creation failed: ${prResult.output.substring(0, 2000)}`, updatedAt: new Date() }).where(eq(issues.id, issueId)); - await notify(issueId, transportConfig, `PR creation failed for: ${escapeHtml(issue.title)}\n${escapeHtml(prResult.output.substring(0, 200))}`); - return; - } - - const prUrlMatch = prResult.output.match(/https:\/\/github\.com\/[\w.\-]+\/[\w.\-]+\/pull\/\d+/); - const prUrl = prUrlMatch?.[0] || null; - - if (!prUrl) { - await db.update(issues).set({ phaseSessionIds, status: "failed", error: `PR creation succeeded but no PR URL found in output. 
Claude may have failed to push or create the PR.\n\nOutput (truncated): ${prResult.output.substring(0, 2000)}`, updatedAt: new Date() }).where(eq(issues.id, issueId)); - await notify(issueId, transportConfig, `PR creation failed for: ${escapeHtml(issue.title)}\nNo PR URL found in Claude output.`); - return; - } - - // Fetch PR summary from GitHub (the PR body Claude wrote via gh pr create) - let prSummary = prResult.output; - try { - const prJson = execFileSync("gh", ["pr", "view", prUrl, "--json", "title,body"], { - cwd: repo.localRepoPath, - encoding: "utf-8", - timeout: 15000, - }); - const prData = JSON.parse(prJson); - if (prData.body) { - prSummary = prData.body.substring(0, MAX_FALLBACK_CHARS); - } - } catch { - // Fallback: keep raw Claude output as prSummary - } - - await db.update(issues).set({ - status: "completed", - prUrl, - prStatus: "open", - prSummary, - phaseSessionIds, - completedAt: new Date(), - updatedAt: new Date(), - }).where(eq(issues.id, issueId)); - - // Send completion message and store it in issueMessages so the user - // can reply to continue the conversation in the same Claude session - const completionHtml = `Issue completed: ${escapeHtml(issue.title)}\nPR: ${escapeHtml(prUrl)}\n\nReply to this message to continue the conversation.`; - const completionPlain = `Issue completed: ${issue.title}\nPR: ${prUrl}\n\nReply to this message to continue the conversation.`; - try { - if (transportConfig.kind === "telegram") { - const msgId = await sendTelegramMessageWithId(transportConfig, completionHtml); - await db.insert(issueMessages).values({ - issueId, - direction: "from_claude", - message: completionPlain, - telegramMessageId: msgId, - }); - } else { - const result = await sendIssueTransportMessage(issueId, transportConfig, completionPlain); - await db.insert(issueMessages).values({ - issueId, - direction: "from_claude", - message: completionPlain, - slackMessageTs: result.slackTs, - }); - } - } catch (err) { - console.error("[pipeline] 
Failed to send completion notification:", err); - } - } - - } catch (err) { - await failIssue(issueId, String(err)); - await notify(issueId, transportConfig, `Pipeline failed for: ${escapeHtml(issue.title)}\nError: ${escapeHtml(String(err).substring(0, 200))}`); - } -} +// Thin re-export — the pipeline is split into modules under ./pipeline/ +export { runIssuePipeline, buildWorktreePath } from "./pipeline/orchestrator"; diff --git a/src/lib/issues/pipeline/claude-runner.ts b/src/lib/issues/pipeline/claude-runner.ts new file mode 100644 index 0000000..0dcc25e --- /dev/null +++ b/src/lib/issues/pipeline/claude-runner.ts @@ -0,0 +1,236 @@ +import { spawn } from "node:child_process"; +import { resolveClaudePath } from "@/lib/utils/resolve-claude-path"; +import { getSetting, setSetting } from "@/lib/db/app-settings"; +import type { PipelinePhaseResult } from "../types"; +import { PHASE_TIMEOUT_MS } from "../types"; + +export const MAX_FALLBACK_CHARS = 50_000; + +/** Allowed env var prefixes/names for Claude CLI child processes. */ +const ALLOWED_ENV_KEYS = new Set([ + "PATH", "HOME", "USER", "SHELL", "TERM", "LANG", "TMPDIR", "XDG_CONFIG_HOME", + "ANTHROPIC_API_KEY", "CLAUDE_API_KEY", "CLAUDE_CODE_API_KEY", + "GH_TOKEN", "GITHUB_TOKEN", +]); + +/** Build a minimal env for Claude CLI — only pass through what's needed. 
*/ +export function buildClaudeEnv(): NodeJS.ProcessEnv { + const env: Record = {}; + for (const key of ALLOWED_ENV_KEYS) { + if (process.env[key]) env[key] = process.env[key]!; + } + return env as unknown as NodeJS.ProcessEnv; +} + +// ── Resume capability check (appSettings-cached, globalThis for HMR) ── + +const _g = globalThis as unknown as { _resumeCheckPromise?: Promise; _resumeCheckAt?: number }; +const RESUME_CHECK_IN_MEMORY_TTL = 60 * 60 * 1000; // 1 hour — re-check DB after this + +export async function isResumeSupported(): Promise { + // Clear stale in-memory cache so DB TTL takes effect for long-running processes + if (_g._resumeCheckPromise && _g._resumeCheckAt && Date.now() - _g._resumeCheckAt > RESUME_CHECK_IN_MEMORY_TTL) { + _g._resumeCheckPromise = undefined; + } + if (!_g._resumeCheckPromise) { + _g._resumeCheckAt = Date.now(); + _g._resumeCheckPromise = doResumeCheck().catch((err) => { + console.error("[pipeline] Resume check failed, will retry:", err); + _g._resumeCheckPromise = undefined; + return false; + }); + } + return _g._resumeCheckPromise; +} + +async function doResumeCheck(): Promise { + // Check DB cache first (survives process restarts) + const cached = getSetting("claude-resume-supported"); + const checkedAt = getSetting("claude-resume-checked-at"); + + if (cached !== null && checkedAt) { + const supported = cached === "true"; + const age = Date.now() - new Date(checkedAt).getTime(); + // Cache true for 7 days; cache false for only 1 hour (self-heals after transient failures) + const ttl = supported ? 
7 * 24 * 60 * 60 * 1000 : 60 * 60 * 1000; + if (age < ttl) { + console.log(`[pipeline] Resume capability cached: ${supported}`); + return supported; + } + } + + console.log("[pipeline] Checking --resume capability..."); + + // Run verification: create a session, then resume it + const testId = crypto.randomUUID(); + const create = await runClaudePhase({ + workdir: "/tmp", + prompt: "Reply with exactly: VERIFY_OK", + timeoutMs: 30_000, + sessionId: testId, + }); + if (!create.success || !create.output.includes("VERIFY_OK")) { + console.log("[pipeline] Resume check: create phase failed, marking unsupported"); + cacheResumeResult(false); + return false; + } + + const resume = await runClaudePhase({ + workdir: "/tmp", + prompt: "Reply with exactly: RESUME_OK", + timeoutMs: 30_000, + resumeSessionId: testId, + }); + const supported = resume.success && resume.output.includes("RESUME_OK"); + console.log(`[pipeline] Resume capability: ${supported}`); + cacheResumeResult(supported); + return supported; +} + +function cacheResumeResult(supported: boolean) { + setSetting("claude-resume-supported", String(supported)); + setSetting("claude-resume-checked-at", new Date().toISOString()); +} + +/** + * Run a single Claude CLI phase. + * Prompt is piped via stdin. Uses --session-id or --resume. + * Parses stream-json output for result text. 
+ */ +export async function runClaudePhase(opts: { + workdir: string; + prompt: string; + systemPrompt?: string; + timeoutMs?: number; + sessionId?: string; + resumeSessionId?: string; +}): Promise { + // Compute once, use everywhere — no double-UUID risk + const effectiveSessionId = opts.resumeSessionId || opts.sessionId || crypto.randomUUID(); + + const args = [ + "-p", + "--verbose", + "--output-format", "stream-json", + "--dangerously-skip-permissions", + ]; + + if (opts.resumeSessionId) { + args.push("--resume", opts.resumeSessionId); + } else { + args.push("--session-id", effectiveSessionId); + } + + // System prompt only on creation (resumed sessions inherit it) + if (opts.systemPrompt && !opts.resumeSessionId) { + args.push("--append-system-prompt", opts.systemPrompt); + } + + const timeout = opts.timeoutMs || PHASE_TIMEOUT_MS; + + return new Promise((resolve) => { + const proc = spawn(resolveClaudePath(), args, { + cwd: opts.workdir, + env: buildClaudeEnv(), + }); + + proc.stdin!.write(opts.prompt); + proc.stdin!.end(); + + let buffer = ""; + let resultText = ""; + const assistantBlocks: string[] = []; + let assistantBlocksSize = 0; + let timedOut = false; + + const timer = setTimeout(() => { + timedOut = true; + proc.kill("SIGTERM"); + // Force kill if SIGTERM is ignored after 30s + setTimeout(() => { try { proc.kill("SIGKILL"); } catch { /* already dead */ } }, 30000); + }, timeout); + + proc.stdout!.on("data", (chunk: Buffer) => { + buffer += chunk.toString(); + // Cap buffer to prevent OOM from very long lines without newlines + if (buffer.length > 1_000_000) { + buffer = buffer.slice(-500_000); + } + const lines = buffer.split("\n"); + buffer = lines.pop() || ""; + + for (const line of lines) { + const trimmed = line.trim(); + if (!trimmed) continue; + try { + const event = JSON.parse(trimmed); + if (event.type === "result" && event.result) { + resultText = event.result; + } + if (event.type === "assistant" && event.message?.content) { + for (const 
block of event.message.content) { + if (block.type === "text" && block.text && assistantBlocksSize < MAX_FALLBACK_CHARS) { + assistantBlocks.push(block.text); + assistantBlocksSize += block.text.length; + } + } + } + } catch { /* skip non-JSON lines */ } + } + }); + + let stderrOutput = ""; + proc.stderr!.on("data", (chunk: Buffer) => { + stderrOutput += chunk.toString(); + if (stderrOutput.length > 10000) stderrOutput = stderrOutput.slice(-10000); + }); + + proc.on("close", (code) => { + clearTimeout(timer); + + // Process remaining buffer + if (buffer.trim()) { + try { + const event = JSON.parse(buffer.trim()); + if (event.type === "result" && event.result) resultText = event.result; + if (event.type === "assistant" && event.message?.content) { + for (const block of event.message.content) { + if (block.type === "text" && block.text && assistantBlocksSize < MAX_FALLBACK_CHARS) { + assistantBlocks.push(block.text); + assistantBlocksSize += block.text.length; + } + } + } + } catch { /* ignore */ } + } + + // Cap resultText to prevent unbounded DB writes + if (resultText.length > MAX_FALLBACK_CHARS) { + resultText = resultText.substring(0, MAX_FALLBACK_CHARS); + } + + let output = resultText.trim() || assistantBlocks.join("\n\n"); + if (timedOut) output = `[TIMEOUT after ${timeout / 1000}s] ${output}`; + if (!output && stderrOutput) output = stderrOutput; + + const hasQuestions = /##\s*Questions/i.test(output); + const questions = hasQuestions + ? 
output.substring(output.search(/##\s*Questions/i)) + : undefined; + + resolve({ + success: code === 0 && !timedOut, + output, + sessionId: effectiveSessionId, + hasQuestions, + questions, + timedOut, + }); + }); + + proc.on("error", (err) => { + clearTimeout(timer); + resolve({ success: false, output: err.message, sessionId: effectiveSessionId }); + }); + }); +} diff --git a/src/lib/issues/pipeline/helpers.ts b/src/lib/issues/pipeline/helpers.ts new file mode 100644 index 0000000..fb5028b --- /dev/null +++ b/src/lib/issues/pipeline/helpers.ts @@ -0,0 +1,258 @@ +import { execFileSync } from "node:child_process"; +import { db } from "@/lib/db"; +import { issues, issueMessages } from "@/lib/db/schema"; +import { eq, and, gt } from "drizzle-orm"; +import { sendTelegramMessageWithId, escapeHtml, TELEGRAM_SAFE_MSG_LEN } from "@/lib/notifications/telegram"; +import { sendSlackMessage, SLACK_SAFE_MSG_LEN } from "@/lib/notifications/slack"; +import type { PipelinePhaseResult, IssueStatus, IssuesTransportConfig } from "../types"; +import { QA_TIMEOUT_MS, PHASE_TIMEOUT_MS } from "../types"; +import { runClaudePhase } from "./claude-runner"; + +export function telegramMarkupToSlackText(text: string): string { + return text + .replace(/<\/?(?:b|i|code|pre)>/g, "") + .replace(/</g, "<") + .replace(/>/g, ">") + .replace(/&/g, "&") + .trim(); +} + +/** Files that should never be auto-committed. Tested against full path from git status. 
*/ +const SENSITIVE_FILE_PATTERN = + /\.(env|pem|key|p12|pfx|jks|keystore)(\..*)?$|\.npmrc$|\.pypirc$|id_(rsa|ed25519|ecdsa|dsa)$|credentials\.json$/i; + +export async function updatePhase(issueId: string, phase: number, status: IssueStatus) { + await db.update(issues).set({ + currentPhase: phase, + status, + updatedAt: new Date(), + }).where(eq(issues.id, issueId)); +} + +export async function failIssue(issueId: string, error: string) { + await db.update(issues).set({ + status: "failed", + error: error.substring(0, 10000), + updatedAt: new Date(), + }).where(eq(issues.id, issueId)); +} + +/** Check if the issue has been cancelled (status set to "failed" externally). */ +export async function isCancelled(issueId: string): Promise { + const [issue] = await db.select({ status: issues.status }).from(issues).where(eq(issues.id, issueId)); + return issue?.status === "failed"; +} + +export async function sendIssueTransportMessage( + issueId: string, + config: IssuesTransportConfig, + text: string +): Promise<{ messageId?: number; slackTs?: string }> { + if (config.kind === "telegram") { + const truncated = text.length > 4096 ? text.substring(0, 4093) + "..." 
: text; + const messageId = await sendTelegramMessageWithId(config, truncated); + return { messageId }; + } + + const [issue] = await db.select({ + slackChannelId: issues.slackChannelId, + slackThreadTs: issues.slackThreadTs, + }).from(issues).where(eq(issues.id, issueId)).limit(1); + + if (!issue?.slackChannelId || !issue.slackThreadTs) { + throw new Error("Slack issue thread metadata missing"); + } + + const result = await sendSlackMessage( + { botToken: config.botToken }, + issue.slackChannelId, + telegramMarkupToSlackText(text).substring(0, SLACK_SAFE_MSG_LEN), + issue.slackThreadTs + ); + + return { slackTs: result.ts }; +} + +export async function notify(issueId: string, config: IssuesTransportConfig, text: string) { + try { + await sendIssueTransportMessage(issueId, config, text); + } catch (err) { + console.error("Failed to send issue notification:", err); + } +} + +export async function handleQuestions( + issueId: string, + questions: string, + config: IssuesTransportConfig +): Promise { + const truncatedQ = config.kind === "telegram" && questions.length > TELEGRAM_SAFE_MSG_LEN + ? questions.substring(0, TELEGRAM_SAFE_MSG_LEN) + "..." 
+ : questions; + + // Capture time BEFORE sending so we don't miss fast replies + const questionTime = new Date(); + + if (config.kind === "telegram") { + const msgId = await sendTelegramMessageWithId(config, + `Questions for issue ${issueId.substring(0, 8)}:\n\n${escapeHtml(truncatedQ)}\n\nReply to this message to answer.` + ); + + await db.insert(issueMessages).values({ + issueId, + direction: "from_claude", + message: questions, + telegramMessageId: msgId, + }); + } else { + const result = await sendIssueTransportMessage( + issueId, + config, + `Questions for issue ${issueId.substring(0, 8)}:\n\n${truncatedQ}\n\nReply in this Slack thread to answer.` + ); + + await db.insert(issueMessages).values({ + issueId, + direction: "from_claude", + message: questions, + slackMessageTs: result.slackTs, + }); + } + + await db.update(issues).set({ status: "waiting_for_input", updatedAt: new Date() }).where(eq(issues.id, issueId)); + + // Wait for reply (polling for user replies newer than the question) + const startWait = Date.now(); + while (Date.now() - startWait < QA_TIMEOUT_MS) { + // Check cancellation + if (await isCancelled(issueId)) return false; + + const [userReply] = await db.select().from(issueMessages) + .where(and( + eq(issueMessages.issueId, issueId), + eq(issueMessages.direction, "from_user"), + gt(issueMessages.createdAt, questionTime) + )) + .limit(1); + + if (userReply) return true; + await new Promise(r => setTimeout(r, 5000)); + } + + return false; +} + +export async function getUserAnswers(issueId: string): Promise { + const messages = await db.select().from(issueMessages) + .where(and(eq(issueMessages.issueId, issueId), eq(issueMessages.direction, "from_user"))) + .orderBy(issueMessages.createdAt); + + if (messages.length === 0) return null; + return messages.map(m => m.message).join("\n\n"); +} + +/** Extract a PipelinePhaseResult from a settled promise, returning a failure result on rejection. 
*/ +export function settledResult(r: PromiseSettledResult): PipelinePhaseResult { + if (r.status === "fulfilled") return r.value; + return { success: false, output: `Agent failed: ${String(r.reason)}` }; +} + +/** Verify worktree is clean after parallel read-only reviewers. Reset if dirty. */ +export function ensureWorktreeClean(worktreeDir: string): void { + try { + const status = execFileSync("git", ["status", "--porcelain"], { + cwd: worktreeDir, encoding: "utf-8", + }).trim(); + if (status) { + console.warn("[pipeline] Review agents modified worktree unexpectedly, resetting"); + execFileSync("git", ["reset", "--hard", "HEAD"], { cwd: worktreeDir, stdio: "ignore" }); + execFileSync("git", ["clean", "-fd"], { cwd: worktreeDir, stdio: "ignore" }); + } + } catch (err) { + console.error("[pipeline] ensureWorktreeClean failed:", err); + } +} + +/** + * Auto-commit any uncommitted changes left behind by a phase. + * Prevents ensureWorktreeClean() from wiping real implementation work. + * Returns true if an auto-commit was created, false if worktree was already clean. + */ +export function autoCommitUncommittedChanges(worktreeDir: string, commitMessage: string): boolean { + try { + const status = execFileSync("git", ["status", "--porcelain"], { + cwd: worktreeDir, encoding: "utf-8", + }).trim(); + + if (!status) return false; + + // Porcelain format: 2-char status prefix + space + path (e.g., "?? file.txt", " M file.txt") + const lines = status.split("\n").filter(Boolean); + console.warn(`[pipeline] Auto-committing ${lines.length} uncommitted changes:`); + for (const l of lines) console.warn(` ${l}`); + + // Stage tracked file modifications + execFileSync("git", ["add", "-u"], { cwd: worktreeDir, stdio: "ignore" }); + + // Stage genuinely new files, skipping secrets/artifacts + const untracked = lines.filter(l => l.startsWith("??")); + const toStage: string[] = []; + for (const line of untracked) { + // Porcelain format: path starts at index 3. 
Git quotes paths with spaces/unicode. + let filePath = line.slice(3); + if (filePath.startsWith('"') && filePath.endsWith('"')) { + filePath = filePath.slice(1, -1).replace(/\\"/g, '"').replace(/\\\\/g, "\\"); + } + if (SENSITIVE_FILE_PATTERN.test(filePath)) { + console.warn(`[pipeline] Skipping suspicious file: ${filePath}`); + continue; + } + toStage.push(filePath); + } + if (toStage.length) { + execFileSync("git", ["add", "--", ...toStage], { cwd: worktreeDir, stdio: "ignore" }); + } + + execFileSync("git", ["commit", "-m", commitMessage], { + cwd: worktreeDir, encoding: "utf-8", + }); + return true; + } catch (err) { + console.error("[pipeline] autoCommitUncommittedChanges failed:", err); + // Unstage to leave worktree in a predictable state for retry + try { execFileSync("git", ["reset", "HEAD"], { cwd: worktreeDir, stdio: "ignore" }); } catch { /* ignore */ } + return false; + } +} + +/** Check whether the branch has any commits beyond the base branch. */ +export function hasBranchCommits(worktreeDir: string, baseBranch: string): boolean { + try { + const log = execFileSync("git", ["log", `${baseBranch}..HEAD`, "--oneline"], { + cwd: worktreeDir, encoding: "utf-8", + }).trim(); + return log.length > 0; + } catch (err) { + console.error(`[pipeline] Cannot compare against base branch '${baseBranch}':`, err); + return false; + } +} + +/** Create a fresh planning session with a new UUID. Updates planningSessionId in DB. */ +export async function createFreshPlanningSession( + workdir: string, + prompt: string, + issueId: string, +): Promise<{ result: PipelinePhaseResult; sessionId: string }> { + const sessionId = crypto.randomUUID(); + const result = await runClaudePhase({ + workdir, + prompt, + systemPrompt: "You are an expert implementation planner. 
Create detailed, actionable plans.", + timeoutMs: PHASE_TIMEOUT_MS, + sessionId, + }); + await db.update(issues).set({ planningSessionId: sessionId, updatedAt: new Date() }) + .where(eq(issues.id, issueId)); + return { result, sessionId }; +} diff --git a/src/lib/issues/pipeline/orchestrator.ts b/src/lib/issues/pipeline/orchestrator.ts new file mode 100644 index 0000000..278b857 --- /dev/null +++ b/src/lib/issues/pipeline/orchestrator.ts @@ -0,0 +1,685 @@ +import { existsSync, mkdirSync } from "node:fs"; +import { join } from "node:path"; +import { execFileSync } from "node:child_process"; +import { db } from "@/lib/db"; +import { issues, issueMessages, repositories } from "@/lib/db/schema"; +import { getIssueAttachments } from "../attachments"; +import { eq } from "drizzle-orm"; +import { escapeHtml, sendTelegramMessageWithId } from "@/lib/notifications/telegram"; +import type { IssuesTransportConfig, PipelinePhaseResult } from "../types"; +import { + PHASE_STATUS_MAP, MAX_PLAN_ITERATIONS, MAX_CODE_REVIEW_ITERATIONS, + PHASE_TIMEOUT_MS, IMPL_TIMEOUT_MS, +} from "../types"; +import { runClaudePhase, isResumeSupported, MAX_FALLBACK_CHARS } from "./claude-runner"; +import { + updatePhase, failIssue, isCancelled, sendIssueTransportMessage, notify, + handleQuestions, getUserAnswers, settledResult, + ensureWorktreeClean, autoCommitUncommittedChanges, hasBranchCommits, + createFreshPlanningSession, +} from "./helpers"; +import { + buildFullPlanningPrompt, buildResumePlanningPrompt, + buildAdversarialReviewPrompt, buildCompletenessReviewPrompt, buildPlanFixPrompt, + buildImplementationPrompt, + buildBugsLogicReviewPrompt, buildSecurityEdgeCasesReviewPrompt, buildDesignPerformanceReviewPrompt, + buildCodeFixPrompt, + buildPrCreationPrompt, +} from "./prompts"; + +/** Build the default worktree directory path under `.claude/worktrees/`. 
*/ +export function buildWorktreePath(repoPath: string, slug: string, shortId: string): string { + return join(repoPath, ".claude", "worktrees", `${slug}-${shortId}`); +} + +export async function runIssuePipeline( + issueId: string, + transportConfig: IssuesTransportConfig +): Promise { + const [issue] = await db.select().from(issues).where(eq(issues.id, issueId)); + if (!issue) throw new Error(`Issue ${issueId} not found`); + + const [repo] = await db.select().from(repositories).where(eq(repositories.id, issue.repositoryId)); + if (!repo) throw new Error(`Repository not found for issue ${issueId}`); + + // Pre-flight: verify repo exists and is a git repo + if (!existsSync(repo.localRepoPath)) { + await failIssue(issueId, `Repository path does not exist: ${repo.localRepoPath}`); + return; + } + try { + execFileSync("git", ["rev-parse", "--git-dir"], { cwd: repo.localRepoPath, stdio: "ignore" }); + } catch { + await failIssue(issueId, `Not a git repository: ${repo.localRepoPath}`); + return; + } + + // Pre-flight: verify gh CLI is available + try { + execFileSync("gh", ["auth", "status"], { cwd: repo.localRepoPath, stdio: "ignore" }); + } catch { + await failIssue(issueId, "gh CLI not authenticated. 
Run: gh auth login"); + return; + } + + // Create or reuse worktree + const slug = issue.title.toLowerCase().replace(/[^a-z0-9]+/g, "-").substring(0, 40); + const shortId = issue.id.substring(0, 8); + let branchName = issue.branchName || `issue/${slug}-${shortId}`; + let worktreeDir = issue.worktreePath || buildWorktreePath(repo.localRepoPath, slug, shortId); + + // Skip worktree creation if it already exists (retry/resume scenario) + if (!existsSync(worktreeDir)) { + mkdirSync(join(repo.localRepoPath, ".claude", "worktrees"), { recursive: true }); + + // Fetch latest default branch so worktree starts from current remote code + try { + execFileSync("git", ["fetch", "origin", repo.defaultBranch], { + cwd: repo.localRepoPath, stdio: "ignore", timeout: 30_000, + }); + } catch { + console.warn(`[pipeline] Could not fetch latest ${repo.defaultBranch} — will use last-known origin/${repo.defaultBranch}`); + } + + try { + execFileSync("git", ["worktree", "add", worktreeDir, "-b", branchName, `origin/${repo.defaultBranch}`], { + cwd: repo.localRepoPath, stdio: "ignore", + }); + } catch { + try { + execFileSync("git", ["worktree", "add", worktreeDir, branchName], { + cwd: repo.localRepoPath, stdio: "ignore", + }); + } catch (e) { + await failIssue(issueId, `Failed to create worktree: ${e}`); + return; + } + } + } + + const phaseSessionIds: Record = issue.phaseSessionIds as Record || {}; + + // Determine start phase (resume support) + const startPhase = issue.currentPhase > 0 ? 
issue.currentPhase : 1;
+
+  // Check if --resume is supported (cached in appSettings, globalThis for HMR)
+  const resumeSupported = await isResumeSupported();
+
+  // Living planning session: created in Phase 1 iter 1, resumed across iterations + Phase 4
+  let planningSessionId = issue.planningSessionId || crypto.randomUUID();
+  let isFirstPlanRun = !issue.planningSessionId; // true = --session-id (create), false = --resume
+
+  // Defer planningSessionId write until after first successful phase (avoids stale UUID on early failure)
+  await db.update(issues).set({
+    worktreePath: worktreeDir,
+    branchName,
+    updatedAt: new Date(),
+  }).where(eq(issues.id, issueId));
+
+  try {
+    // ── Phases 1-3: Planning + Reviews ─────────────────────
+    // Guard covers phases 1-3 since they're part of the planning loop
+    if (startPhase <= 3) {
+      if (await isCancelled(issueId)) return;
+      await updatePhase(issueId, 1, "planning");
+      await notify(issueId, transportConfig, `Planning started for: ${escapeHtml(issue.title)}`);
+
+      let planOutput = "";
+      let planIterations = 0;
+      let planApproved = false;
+      let skipPlanning = false; // Set after plan-fix to go directly to re-review
+      const priorPlanFindings: string[] = []; // Accumulated findings from previous review rounds
+
+      while (!planApproved && planIterations < MAX_PLAN_ITERATIONS) {
+        if (!skipPlanning) {
+          // Hoist DB queries above the branching logic (avoids duplication)
+          const [currentIssue] = await db.select().from(issues).where(eq(issues.id, issueId));
+          const userAnswers = await getUserAnswers(issueId);
+          // Re-query attachments each iteration (user may add photos via Q&A replies)
+          const attachments = await getIssueAttachments(issueId);
+          const attachmentPaths = attachments.map(a => a.filePath);
+
+          // Build the full prompt (used for fresh sessions and as fallback)
+          const freshPrompt = buildFullPlanningPrompt(
+            issue.description, planOutput, currentIssue?.planReview1, currentIssue?.planReview2, userAnswers, attachmentPaths,
+          );
+
+          // Run Phase 1 — create, resume, or fresh fallback
+          let planResult: PipelinePhaseResult;
+
+          if (isFirstPlanRun) {
+            // CREATE the planning session
+            planResult = await runClaudePhase({
+              workdir: worktreeDir,
+              prompt: freshPrompt,
+              systemPrompt: "You are an expert implementation planner. Create detailed, actionable plans.",
+              timeoutMs: PHASE_TIMEOUT_MS,
+              sessionId: planningSessionId,
+            });
+            isFirstPlanRun = false;
+          } else if (resumeSupported) {
+            // RESUME the planning session (keeps exploration context!)
+            const resumePrompt = buildResumePlanningPrompt(
+              currentIssue?.planReview1, currentIssue?.planReview2, userAnswers, attachmentPaths,
+            );
+            planResult = await runClaudePhase({
+              workdir: worktreeDir,
+              prompt: resumePrompt,
+              timeoutMs: PHASE_TIMEOUT_MS,
+              resumeSessionId: planningSessionId,
+            });
+
+            // If resume failed (not timeout), fall back to fresh session with full context
+            if (!planResult.success && !planResult.timedOut) {
+              console.log("[pipeline] Planning resume failed, falling back to fresh session");
+              const fresh = await createFreshPlanningSession(worktreeDir, freshPrompt, issueId);
+              planResult = fresh.result;
+              planningSessionId = fresh.sessionId;
+            }
+          } else {
+            // Resume not supported — fresh session each iteration (current behavior)
+            const fresh = await createFreshPlanningSession(worktreeDir, freshPrompt, issueId);
+            planResult = fresh.result;
+            planningSessionId = fresh.sessionId;
+          }
+
+          if (!planResult.success) {
+            await failIssue(issueId, `Planning failed: ${planResult.output.substring(0, 2000)}`);
+            return;
+          }
+
+          // Store iteration-indexed session IDs (keep "1" pointing to latest for CLI resume)
+          const planIterKey = planIterations > 0 ? `.${planIterations + 1}` : "";
+          if (planResult.sessionId) phaseSessionIds[`1${planIterKey}`] = planResult.sessionId;
+          phaseSessionIds["1"] = planResult.sessionId!;
+          planOutput = planResult.output;
+          await db.update(issues).set({
+            planOutput,
+            planningSessionId,
+            phaseSessionIds,
+            updatedAt: new Date(),
+          }).where(eq(issues.id, issueId));
+
+          // Handle questions
+          if (planResult.hasQuestions && planResult.questions) {
+            const answered = await handleQuestions(issueId, planResult.questions, transportConfig);
+            if (!answered) {
+              await failIssue(issueId, "Timed out waiting for user reply to questions");
+              return;
+            }
+            continue;
+          }
+        } else {
+          skipPlanning = false;
+        }
+
+        // Count this as a plan iteration (questions don't consume iterations)
+        planIterations++;
+
+        // ── Phase 2: Plan Verification (2 reviewers in parallel) ──
+        if (await isCancelled(issueId)) return;
+        await updatePhase(issueId, 2, "reviewing_plan_1");
+        await notify(issueId, transportConfig, `Plan verification started (2 reviewers in parallel)`);
+
+        const priorFindingsText = priorPlanFindings.length > 0
+          ? priorPlanFindings.join("\n\n========================================\n\n")
+            .substring(0, MAX_FALLBACK_CHARS)
+          : undefined;
+
+        const planReviewResults = await Promise.allSettled([
+          runClaudePhase({
+            workdir: worktreeDir,
+            prompt: buildAdversarialReviewPrompt(planOutput, priorFindingsText),
+            systemPrompt: "You are an adversarial plan reviewer. Find problems, not validate.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          }),
+          runClaudePhase({
+            workdir: worktreeDir,
+            prompt: buildCompletenessReviewPrompt(planOutput, priorFindingsText),
+            systemPrompt: "You are a completeness and feasibility reviewer. 
Find gaps.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          }),
+        ]);
+        const review1Result = settledResult(planReviewResults[0]);
+        const review2Result = settledResult(planReviewResults[1]);
+
+        // Store iteration-indexed review session IDs (keep "2"/"3" pointing to latest for CLI resume)
+        const reviewIterKey = planIterations > 1 ? `.${planIterations}` : "";
+        if (review1Result.sessionId) phaseSessionIds[`2${reviewIterKey}`] = review1Result.sessionId;
+        if (review2Result.sessionId) phaseSessionIds[`3${reviewIterKey}`] = review2Result.sessionId;
+        if (review1Result.sessionId) phaseSessionIds["2"] = review1Result.sessionId;
+        if (review2Result.sessionId) phaseSessionIds["3"] = review2Result.sessionId;
+        // Accumulate reviews across iterations (prefix with round number for context)
+        const roundReview1 = planIterations > 1
+          ? `# Plan Review Round ${planIterations} - Adversarial\n${review1Result.output}`
+          : review1Result.output;
+        const roundReview2 = planIterations > 1
+          ? `# Plan Review Round ${planIterations} - Completeness\n${review2Result.output}`
+          : review2Result.output;
+
+        const [prevIssue] = await db.select({
+          pr1: issues.planReview1,
+          pr2: issues.planReview2,
+        }).from(issues).where(eq(issues.id, issueId));
+
+        // Newest round first so truncation drops stale rounds, not the latest
+        const accumulatedReview1 = planIterations === 1
+          ? roundReview1
+          : (roundReview1 + "\n\n========================================\n\n" + (prevIssue?.pr1 || ""))
+            .substring(0, MAX_FALLBACK_CHARS);
+        const accumulatedReview2 = planIterations === 1
+          ? roundReview2
+          : (roundReview2 + "\n\n========================================\n\n" + (prevIssue?.pr2 || ""))
+            .substring(0, MAX_FALLBACK_CHARS);
+
+        await db.update(issues).set({
+          planReview1: accumulatedReview1,
+          planReview2: accumulatedReview2,
+          phaseSessionIds,
+          updatedAt: new Date(),
+        }).where(eq(issues.id, issueId));
+
+        // Check if EITHER reviewer found CRITICAL issues (VERDICT: FAIL)
+        const review1Failed = /VERDICT:\s*FAIL/i.test(review1Result.output);
+        const review2Failed = /VERDICT:\s*FAIL/i.test(review2Result.output);
+
+        if (review1Failed || review2Failed) {
+          // Accumulate findings for subsequent review rounds
+          const roundFindings = [
+            review1Failed ? `### Round ${planIterations} - Adversarial Review CRITICALs\n${review1Result.output}` : "",
+            review2Failed ? `### Round ${planIterations} - Completeness Review CRITICALs\n${review2Result.output}` : "",
+          ].filter(Boolean).join("\n\n");
+          priorPlanFindings.push(roundFindings);
+
+          if (planIterations >= MAX_PLAN_ITERATIONS) break;
+          if (await isCancelled(issueId)) return;
+
+          // ── Plan Fix: surgically address review findings ──
+          await notify(issueId, transportConfig,
+            `Plan review round ${planIterations} failed. Fixing plan before attempt ${planIterations + 1}...`
+          );
+
+          const priorFindingsForFix = priorPlanFindings.length > 1
+            ? priorPlanFindings.slice(0, -1).join("\n\n")
+            : undefined;
+          const capPerInput = Math.floor(MAX_FALLBACK_CHARS / (priorFindingsForFix ? 4 : 3)) - 500;
+          const fixPrompt = buildPlanFixPrompt(
+            planOutput.substring(0, capPerInput),
+            review1Result.output.substring(0, capPerInput),
+            review2Result.output.substring(0, capPerInput),
+            priorFindingsForFix?.substring(0, capPerInput),
+          );
+
+          // Always use a fresh session for fixes — resumed sessions respond
+          // conversationally and fail to produce structured plan output
+          const fixResult = await runClaudePhase({
+            workdir: worktreeDir,
+            prompt: fixPrompt,
+            systemPrompt: "You are an expert plan fixer. Surgically revise the plan to address all review findings. Output ONLY the complete revised plan text with no commentary.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          });
+
+          // Store fix session ID for debugging
+          if (fixResult.sessionId) {
+            phaseSessionIds[`fix.${planIterations}`] = fixResult.sessionId;
+            await db.update(issues).set({ phaseSessionIds, updatedAt: new Date() })
+              .where(eq(issues.id, issueId));
+          }
+          console.log(`[pipeline] Plan fix iteration ${planIterations} (session ${fixResult.sessionId}): success=${fixResult.success}, output=${fixResult.output.length} chars`);
+
+          if (fixResult.success && fixResult.output.trim()) {
+            // Accept the fix output as the new plan — the next review round is the quality gate
+            planOutput = fixResult.output
+              .replace(/\n*VERDICT:\s*(READY|PASS|FAIL)[^\n]*/gi, "")
+              .trim();
+            await db.update(issues).set({
+              planOutput,
+              updatedAt: new Date(),
+            }).where(eq(issues.id, issueId));
+            console.log(`[pipeline] Plan updated from fix (iteration ${planIterations}), ${planOutput.length} chars`);
+            skipPlanning = true; // Skip planning, go straight to re-review
+          } else {
+            console.warn(`[pipeline] Plan fix failed (success=${fixResult.success}). Falling back to re-planning.`);
+            // Don't set skipPlanning — let the next iteration re-run planning with review feedback
+          }
+
+          continue;
+        }
+
+        planApproved = true;
+      }
+
+      if (!planApproved) {
+        await failIssue(issueId, `Plan could not pass review after ${MAX_PLAN_ITERATIONS} attempts`);
+        await notify(issueId, transportConfig, `Planning failed after ${MAX_PLAN_ITERATIONS} attempts for: ${escapeHtml(issue.title)}`);
+        return;
+      }
+
+      await notify(issueId, transportConfig, `Plan approved. 
Starting implementation...`);
+    }
+
+    // ── Phase 4: Implementation (resume planning session if possible) ──
+    if (startPhase <= 4) {
+      if (await isCancelled(issueId)) return;
+      await updatePhase(issueId, 4, "implementing");
+
+      const [currentIssue] = await db.select().from(issues).where(eq(issues.id, issueId));
+      // Re-query attachments (user may have added photos via Q&A replies since planning)
+      const implAttachments = await getIssueAttachments(issueId);
+      const implAttachmentPaths = implAttachments.map(a => a.filePath);
+      let implPrompt = buildImplementationPrompt(
+        currentIssue?.planOutput || "",
+        currentIssue?.planReview1 || "",
+        currentIssue?.planReview2 || "",
+        implAttachmentPaths,
+      );
+      const userAnswers = await getUserAnswers(issueId);
+      if (userAnswers) {
+        implPrompt += `\n\n## Additional Context from User\n${userAnswers}`;
+      }
+
+      // Resume the planning session if the session exists and --resume is supported.
+      const canResume = (
+        startPhase <= 4 &&
+        currentIssue?.planningSessionId &&
+        resumeSupported
+      );
+
+      let implResult: Awaited<ReturnType<typeof runClaudePhase>>;
+
+      if (canResume) {
+        implResult = await runClaudePhase({
+          workdir: worktreeDir,
+          prompt: implPrompt,
+          timeoutMs: IMPL_TIMEOUT_MS,
+          resumeSessionId: currentIssue!.planningSessionId!,
+        });
+
+        // If resume failed (not timeout), retry with fresh session
+        if (!implResult.success && !implResult.timedOut) {
+          console.log("[pipeline] Implementation resume failed, retrying with fresh session");
+          implResult = await runClaudePhase({
+            workdir: worktreeDir,
+            prompt: implPrompt,
+            systemPrompt: "You are an expert software engineer. Implement the plan precisely.",
+            timeoutMs: IMPL_TIMEOUT_MS,
+          });
+        }
+      } else {
+        // Fresh session (crash recovery, retry, or resume not supported)
+        implResult = await runClaudePhase({
+          workdir: worktreeDir,
+          prompt: implPrompt,
+          systemPrompt: "You are an expert software engineer. Implement the plan precisely.",
+          timeoutMs: IMPL_TIMEOUT_MS,
+        });
+      }
+
+      if (!implResult.success) {
+        await failIssue(issueId, `Implementation failed: ${implResult.output.substring(0, 2000)}`);
+        return;
+      }
+
+      phaseSessionIds["4"] = implResult.sessionId!;
+      await db.update(issues).set({ phaseSessionIds, updatedAt: new Date() }).where(eq(issues.id, issueId));
+
+      // ── Commit gate: ensure implementation actually committed ──
+      autoCommitUncommittedChanges(worktreeDir,
+        "feat: implement changes\n\nAuto-committed by pipeline — implementation phase did not commit.");
+      if (!hasBranchCommits(worktreeDir, repo.defaultBranch)) {
+        await failIssue(issueId, "Implementation produced no changes — no commits found beyond base branch.");
+        return;
+      }
+
+      await notify(issueId, transportConfig, `Implementation complete. Starting code review...`);
+    }
+
+    // ── Phases 5-6: Adversarial Code Review + Auto-Fix Loop ──
+    if (startPhase <= 6) {
+      let codeApproved = false;
+      let crIterations = 0;
+
+      while (!codeApproved && crIterations < MAX_CODE_REVIEW_ITERATIONS) {
+        crIterations++;
+
+        // ── Phase 5: 3 specialist reviewers in parallel (READ-ONLY) ──
+        if (await isCancelled(issueId)) return;
+        await updatePhase(issueId, 5, "reviewing_code_1");
+        await notify(issueId, transportConfig,
+          `Code review round ${crIterations}/${MAX_CODE_REVIEW_ITERATIONS} (3 specialist reviewers)`
+        );
+
+        const codeReviewResults = await Promise.allSettled([
+          runClaudePhase({
+            workdir: worktreeDir,
+            prompt: buildBugsLogicReviewPrompt(repo.defaultBranch),
+            systemPrompt: "You are a bugs & logic reviewer. DO NOT modify files.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          }),
+          runClaudePhase({
+            workdir: worktreeDir,
+            prompt: buildSecurityEdgeCasesReviewPrompt(repo.defaultBranch),
+            systemPrompt: "You are a security reviewer. DO NOT modify files.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          }),
+          runClaudePhase({
+            workdir: worktreeDir,
+            prompt: buildDesignPerformanceReviewPrompt(repo.defaultBranch),
+            systemPrompt: "You are a design & performance reviewer. DO NOT modify files.",
+            timeoutMs: PHASE_TIMEOUT_MS,
+          }),
+        ]);
+        const bugsResult = settledResult(codeReviewResults[0]);
+        const securityResult = settledResult(codeReviewResults[1]);
+        const designResult = settledResult(codeReviewResults[2]);
+
+        // Verify reviewers didn't modify the worktree
+        ensureWorktreeClean(worktreeDir);
+
+        // Combine reviews with per-reviewer caps to stay under MAX_FALLBACK_CHARS
+        const capPerReviewer = Math.floor(MAX_FALLBACK_CHARS / 3) - 200;
+        const roundReview = [
+          `# Code Review Round ${crIterations}`,
+          "## Bugs & Logic Review\n" + bugsResult.output.substring(0, capPerReviewer),
+          "## Security & Edge Cases Review\n" + securityResult.output.substring(0, capPerReviewer),
+          "## Design & Performance Review\n" + designResult.output.substring(0, capPerReviewer),
+        ].join("\n\n---\n\n");
+
+        // Accumulate reviews across iterations (don't overwrite prior rounds)
+        const [prevIssue] = await db.select({ cr1: issues.codeReview1 }).from(issues).where(eq(issues.id, issueId));
+        const accumulatedReview = crIterations === 1
+          ? roundReview
+          : ((prevIssue?.cr1 || "") + "\n\n========================================\n\n" + roundReview).substring(0, MAX_FALLBACK_CHARS);
+
+        // Store all 3 reviewer session IDs with iteration indexing
+        const crIterKey = crIterations > 1 ? 
`.${crIterations}` : "";
+        if (bugsResult.sessionId) phaseSessionIds[`5a${crIterKey}`] = bugsResult.sessionId;
+        if (securityResult.sessionId) phaseSessionIds[`5b${crIterKey}`] = securityResult.sessionId;
+        if (designResult.sessionId) phaseSessionIds[`5c${crIterKey}`] = designResult.sessionId;
+        // Keep "5" pointing to latest for CLI resume
+        if (bugsResult.sessionId) phaseSessionIds["5"] = bugsResult.sessionId;
+        await db.update(issues).set({
+          codeReview1: accumulatedReview,
+          phaseSessionIds,
+          updatedAt: new Date(),
+        }).where(eq(issues.id, issueId));
+
+        // Check if all reviewers passed
+        const anyFailed = [bugsResult, securityResult, designResult].some(
+          r => /VERDICT:\s*FAIL/i.test(r.output)
+        );
+
+        if (!anyFailed) {
+          codeApproved = true;
+          await notify(issueId, transportConfig, `All code reviews passed!`);
+          break;
+        }
+
+        if (crIterations >= MAX_CODE_REVIEW_ITERATIONS) break;
+
+        // ── Phase 6: Auto-fix all issues ──
+        if (await isCancelled(issueId)) return;
+        await updatePhase(issueId, 6, "reviewing_code_2");
+        await notify(issueId, transportConfig,
+          `Fixing code review findings (round ${crIterations}/${MAX_CODE_REVIEW_ITERATIONS})...`
+        );
+
+        // Track HEAD before fix for convergence detection
+        let headBefore = "";
+        try {
+          headBefore = execFileSync("git", ["rev-parse", "HEAD"], {
+            cwd: worktreeDir, encoding: "utf-8",
+          }).trim();
+        } catch { /* ignore */ }
+
+        const fixResult = await runClaudePhase({
+          workdir: worktreeDir,
+          prompt: buildCodeFixPrompt(
+            repo.defaultBranch,
+            bugsResult.output,
+            securityResult.output,
+            designResult.output,
+          ),
+          systemPrompt: "You are an expert software engineer. Fix all identified issues.",
+          timeoutMs: IMPL_TIMEOUT_MS,
+        });
+
+        // Accumulate fix outputs across iterations
+        const [prevFix] = await db.select({ cr2: issues.codeReview2 }).from(issues).where(eq(issues.id, issueId));
+        const fixOutput = `# Fix Round ${crIterations}\n${fixResult.output}`;
+        const accumulatedFixes = crIterations === 1
+          ? fixOutput
+          : ((prevFix?.cr2 || "") + "\n\n========================================\n\n" + fixOutput).substring(0, MAX_FALLBACK_CHARS);
+
+        // Store iteration-indexed fix session IDs
+        const fixIterKey = crIterations > 1 ? `.${crIterations}` : "";
+        if (fixResult.sessionId) phaseSessionIds[`6${fixIterKey}`] = fixResult.sessionId;
+        // Keep "6" pointing to latest for CLI resume
+        if (fixResult.sessionId) phaseSessionIds["6"] = fixResult.sessionId;
+        await db.update(issues).set({
+          codeReview2: accumulatedFixes,
+          phaseSessionIds,
+          updatedAt: new Date(),
+        }).where(eq(issues.id, issueId));
+
+        if (!fixResult.success) {
+          await failIssue(issueId, `Code fix failed: ${fixResult.output.substring(0, 2000)}`);
+          return;
+        }
+
+        // Convergence check: did the fix agent make any commits?
+        try {
+          const headAfter = execFileSync("git", ["rev-parse", "HEAD"], {
+            cwd: worktreeDir, encoding: "utf-8",
+          }).trim();
+          if (headBefore && headBefore === headAfter) {
+            autoCommitUncommittedChanges(worktreeDir,
+              "fix: address code review findings\n\nAuto-committed by pipeline — fix phase did not commit.");
+            await notify(issueId, transportConfig, `Fix agent made no new commits. Stopping review loop.`);
+            break;
+          }
+        } catch { /* ignore */ }
+
+        // Auto-commit any remaining uncommitted changes from the fix agent
+        autoCommitUncommittedChanges(worktreeDir,
+          "fix: address code review findings\n\nAuto-committed by pipeline — fix phase did not commit.");
+
+        await notify(issueId, transportConfig, `Fixes applied. Re-reviewing...`);
+      }
+
+      if (!codeApproved) {
+        await notify(issueId, transportConfig,
+          `Code review reached max iterations (${MAX_CODE_REVIEW_ITERATIONS}). Proceeding to PR.`
+        );
+      }
+    }
+
+    // ── Phase 7: PR Creation ───────────────────────────────
+    if (startPhase <= 7) {
+      if (await isCancelled(issueId)) return;
+      await updatePhase(issueId, 7, "creating_pr");
+
+      const prAttachments = await getIssueAttachments(issueId);
+      const prAttachmentPaths = prAttachments.map(a => a.filePath);
+      const prResult = await runClaudePhase({
+        workdir: worktreeDir,
+        prompt: buildPrCreationPrompt(issue.title, issue.description, repo.defaultBranch, prAttachmentPaths),
+        systemPrompt: "Create a pull request using the gh CLI.",
+        timeoutMs: PHASE_TIMEOUT_MS,
+      });
+
+      phaseSessionIds["7"] = prResult.sessionId!;
+
+      if (!prResult.success) {
+        await db.update(issues).set({ phaseSessionIds, status: "failed", error: `PR creation failed: ${prResult.output.substring(0, 2000)}`, updatedAt: new Date() }).where(eq(issues.id, issueId));
+        await notify(issueId, transportConfig, `PR creation failed for: ${escapeHtml(issue.title)}\n${escapeHtml(prResult.output.substring(0, 200))}`);
+        return;
+      }
+
+      const prUrlMatch = prResult.output.match(/https:\/\/github\.com\/[\w.\-]+\/[\w.\-]+\/pull\/\d+/);
+      const prUrl = prUrlMatch?.[0] || null;
+
+      if (!prUrl) {
+        await db.update(issues).set({ phaseSessionIds, status: "failed", error: `PR creation succeeded but no PR URL found in output. Claude may have failed to push or create the PR.\n\nOutput (truncated): ${prResult.output.substring(0, 2000)}`, updatedAt: new Date() }).where(eq(issues.id, issueId));
+        await notify(issueId, transportConfig, `PR creation failed for: ${escapeHtml(issue.title)}\nNo PR URL found in Claude output.`);
+        return;
+      }
+
+      // Fetch PR summary from GitHub (the PR body Claude wrote via gh pr create)
+      let prSummary = prResult.output;
+      try {
+        const prJson = execFileSync("gh", ["pr", "view", prUrl, "--json", "title,body"], {
+          cwd: repo.localRepoPath,
+          encoding: "utf-8",
+          timeout: 15000,
+        });
+        const prData = JSON.parse(prJson);
+        if (prData.body) {
+          prSummary = prData.body.substring(0, MAX_FALLBACK_CHARS);
+        }
+      } catch {
+        // Fallback: keep raw Claude output as prSummary
+      }
+
+      await db.update(issues).set({
+        status: "completed",
+        prUrl,
+        prStatus: "open",
+        prSummary,
+        phaseSessionIds,
+        completedAt: new Date(),
+        updatedAt: new Date(),
+      }).where(eq(issues.id, issueId));
+
+      // Send completion message and store it in issueMessages so the user
+      // can reply to continue the conversation in the same Claude session
+      const completionHtml = `Issue completed: ${escapeHtml(issue.title)}\nPR: ${escapeHtml(prUrl)}\n\nReply to this message to continue the conversation.`;
+      const completionPlain = `Issue completed: ${issue.title}\nPR: ${prUrl}\n\nReply to this message to continue the conversation.`;
+      try {
+        if (transportConfig.kind === "telegram") {
+          const msgId = await sendTelegramMessageWithId(transportConfig, completionHtml);
+          await db.insert(issueMessages).values({
+            issueId,
+            direction: "from_claude",
+            message: completionPlain,
+            telegramMessageId: msgId,
+          });
+        } else {
+          const result = await sendIssueTransportMessage(issueId, transportConfig, completionPlain);
+          await db.insert(issueMessages).values({
+            issueId,
+            direction: "from_claude",
+            message: completionPlain,
+            slackMessageTs: result.slackTs,
+          });
+        }
+      } catch (err) {
+        console.error("[pipeline] 
Failed to send completion notification:", err);
+      }
+    }
+
+  } catch (err) {
+    await failIssue(issueId, String(err));
+    await notify(issueId, transportConfig, `Pipeline failed for: ${escapeHtml(issue.title)}\nError: ${escapeHtml(String(err).substring(0, 200))}`);
+  }
+}
diff --git a/src/lib/issues/pipeline/prompts.ts b/src/lib/issues/pipeline/prompts.ts
new file mode 100644
index 0000000..b37319b
--- /dev/null
+++ b/src/lib/issues/pipeline/prompts.ts
@@ -0,0 +1,372 @@
+// ── Prompt builders ──────────────────────────────────────────
+// Pure functions that generate prompt strings for each pipeline phase.
+// Extracted from pipeline.ts — no imports needed.
+
+/** Build a prompt for resumed planning sessions (only new context, no duplicate planning prompt). */
+export function buildResumePlanningPrompt(
+  reviewFeedback: string | null | undefined,
+  completenessReview: string | null | undefined,
+  userAnswers: string | null,
+  attachmentPaths: string[] = [],
+): string {
+  const attachmentReminder = attachmentPaths.length > 0
+    ? `\n\n## Attached Images (still available)\nUse the Read tool to view these images for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}\n`
+    : "";
+
+  if (reviewFeedback) {
+    return `Your previous plan was reviewed and found to have issues. Create a REVISED plan addressing all feedback below.
+
+## Review Feedback
+${reviewFeedback}
+${completenessReview ? `\n## Completeness Review Feedback\n${completenessReview}` : ""}
+${userAnswers ? `\n## User's Answers to Your Questions\n${userAnswers}` : ""}
+${attachmentReminder}
+Revise your implementation plan to address all the review feedback. Include the "## Codebase Analysis" section again.
+End with "VERDICT: READY" or "## Questions" if you need more information.`;
+  }
+  if (userAnswers) {
+    return `Here are the answers to your questions:
+
+${userAnswers}
+${attachmentReminder}
+Please update your implementation plan based on these answers. Include the "## Codebase Analysis" section.
+End with "VERDICT: READY" or "## Questions" if you need more information.`;
+  }
+  // Resuming after crash with no new context — ask to continue
+  return `Continue your implementation plan where you left off. Include the "## Codebase Analysis" section.
+${attachmentReminder}
+End with "VERDICT: READY" or "## Questions" if you need more information.`;
+}
+
+/** Build a full planning prompt with all available context (for fresh sessions). */
+export function buildFullPlanningPrompt(
+  description: string,
+  planOutput: string,
+  reviewFeedback: string | null | undefined,
+  completenessReview: string | null | undefined,
+  userAnswers: string | null,
+  attachmentPaths: string[] = [],
+): string {
+  let prompt = buildPlanningPrompt(description, attachmentPaths);
+  if (planOutput && reviewFeedback) {
+    prompt += `\n\n## Previous Plan Review Feedback\n${reviewFeedback}`;
+  }
+  if (planOutput && completenessReview) {
+    prompt += `\n\n## Completeness Review Feedback\n${completenessReview}`;
+  }
+  if (userAnswers) {
+    prompt += `\n\n## User's Answers to Questions\n${userAnswers}`;
+  }
+  return prompt;
+}
+
+export function buildPlanningPrompt(description: string, attachmentPaths: string[] = []): string {
+  const attachmentSection = attachmentPaths.length > 0
+    ? `\n\n## Attached Images\nThe following images were provided with this issue. Use the Read tool to view them for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}`
+    : "";
+
+  return `You are tasked with creating a detailed implementation plan for the following issue.
+
+## Issue Description
+${description}
+${attachmentSection}
+
+## Instructions
+1. Analyze the codebase to understand the existing architecture and patterns
+2. Create a step-by-step implementation plan
+3. Identify files that need to be created or modified
+4. Note any potential risks or edge cases
+5. If you have questions that would significantly affect the plan, add a "## Questions" section at the end
+
+## Output Format
+Provide a structured plan with:
+- Overview of the approach
+- Detailed steps with file paths
+- Any new dependencies needed
+- Testing strategy
+
+**Important**: Include a "## Codebase Analysis" section with:
+- Key file paths you examined and their purposes
+- Relevant code patterns and conventions observed
+- Critical code snippets that the implementer must reference
+- Architecture notes (how components connect)
+
+This analysis will be used by the implementation phase, so be thorough.
+
+End with either:
+- "VERDICT: READY" if the plan is complete
+- "## Questions" section if you need clarification`;
+}
+
+export function buildAdversarialReviewPrompt(plan: string, priorFindings?: string): string {
+  const priorSection = priorFindings ? `
+## Prior Review Findings (from previous rounds)
+The following CRITICAL issues were found in earlier review rounds. You MUST verify that EACH of these has been addressed in the current plan. If any remain unaddressed, re-list them as CRITICAL.
+
+${priorFindings}
+
+` : "";
+
+  return `You are an adversarial plan reviewer. Your job is to find problems, not validate.
+
+## Plan to Review
+${plan}
+${priorSection}
+## Instructions
+Review this plan for:
+1. Security vulnerabilities
+2. Missing error handling
+3. Race conditions or concurrency issues
+4. Incorrect assumptions about the codebase
+5. Missing steps or dependencies
+6. Breaking changes
+${priorFindings ? "7. Verify ALL prior findings listed above have been addressed" : ""}
+
+For each issue found, classify as:
+- CRITICAL: Must be fixed before implementation
+- WARNING: Should be addressed but not blocking
+
+## Output Format
+List each issue with its severity, description, and suggested fix.
+
+End with:
+- "VERDICT: PASS" if no CRITICAL issues found
+- "VERDICT: FAIL" if CRITICAL issues exist`;
+}
+
+export function buildCompletenessReviewPrompt(plan: string, priorFindings?: string): string {
+  const priorSection = priorFindings ? `
+## Prior Review Findings (from previous rounds)
+The following issues were found in earlier review rounds. You MUST verify that EACH of these has been addressed in the current plan. If any remain unaddressed, re-list them as blocking gaps.
+
+${priorFindings}
+
+` : "";
+
+  return `You are a completeness and feasibility reviewer.
+
+## Plan
+${plan}
+${priorSection}
+## Instructions
+Check the plan for:
+1. Missing implementation steps
+2. Incorrect assumptions about the existing code
+3. Missing test coverage
+4. Integration gaps
+5. Deployment or migration concerns
+${priorFindings ? "6. Verify ALL prior findings listed above have been addressed" : ""}
+
+For each gap found, classify as:
+- MISSING_STEP: A required step is not in the plan
+- WRONG_ASSUMPTION: The plan assumes something incorrect about the codebase
+
+## Output Format
+List each finding with classification and description.
+
+End with:
+- "VERDICT: PASS" if the plan is complete and feasible
+- "VERDICT: FAIL" if there are blocking gaps`;
+}
+
+export function buildPlanFixPrompt(plan: string, adversarialReview: string, completenessReview: string, priorFindings?: string): string {
+  const priorSection = priorFindings ? `
+## Previously Identified Issues (from earlier rounds)
+These issues were found in earlier review rounds. Ensure they are ALSO addressed in your revision, not just the latest findings.
+
+${priorFindings}
+` : "";
+
+  return `You are an expert plan fixer. Your job is to surgically revise an implementation plan to address ALL findings from two independent reviewers. 
+
+## Current Plan
+${plan}
+
+## Adversarial Review Findings
+${adversarialReview}
+
+## Completeness Review Findings
+${completenessReview}
+${priorSection}
+## Instructions
+1. Read EVERY finding from both reviewers — CRITICAL, WARNING, and NOTE severity
+2. For each finding, make a concrete change to the plan that fully addresses it
+3. Do NOT rewrite the plan from scratch — preserve all parts that were not flagged
+4. If a finding suggests a specific fix, incorporate it directly
+5. If two findings conflict, prefer the safer/more correct approach
+6. Ensure the revised plan is still coherent and self-consistent after all fixes
+${priorFindings ? "7. Also verify that ALL previously identified issues (listed above) remain addressed" : ""}
+
+## Output Format
+Output the COMPLETE revised plan (not just the diffs). The output must be a standalone, clean plan that can be handed directly to an implementer. Do NOT include a changelog, commentary, or summary of what was changed — just output the revised plan text and nothing else.`;
+}
+
+export function buildImplementationPrompt(plan: string, review1: string, review2: string, attachmentPaths: string[] = []): string {
+  const attachmentSection = attachmentPaths.length > 0
+    ? `\n\n## Attached Images\nUse the Read tool to view these images for visual context:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}`
+    : "";
+
+  return `Implement the following plan. Follow it precisely, incorporating the review feedback.
+
+## Implementation Plan
+${plan}
+
+## Review Feedback to Address
+### Adversarial Review
+${review1}
+
+### Completeness Review
+${review2}
+${attachmentSection}
+
+## Instructions
+1. Implement each step of the plan
+2. Address all review feedback
+3. Write tests for new functionality
+4. Ensure all existing tests still pass
+5. CRITICAL: You MUST commit all changes before finishing. Run \`git add -A && git commit -m "feat: <description>"\`. Uncommitted changes will be lost.
+
+Do NOT create a PR — that will be done in a separate step.`;
+}
+
+// ── Specialist code review prompts (READ-ONLY) ──────────────
+
+export function buildBugsLogicReviewPrompt(defaultBranch: string): string {
+  return `You are a specialist code reviewer focused on BUGS AND LOGIC ERRORS.
+Your job is to FIND defects — do NOT modify any files.
+
+## Instructions
+1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes
+2. Read every changed file in full for context
+3. For each change, actively try to break it:
+   - Logic errors, wrong conditions, inverted booleans, off-by-one
+   - Null/undefined handling gaps
+   - Race conditions and concurrency bugs
+   - Missing error handling, swallowed errors
+   - Boundary conditions (empty, zero, MAX_INT, very large inputs)
+4. DO NOT modify any files. You are a READ-ONLY reviewer.
+
+## Output Format
+For each issue found:
+- **Severity**: CRITICAL / WARNING / NOTE
+- **File**: exact file path and line number
+- **Bug**: What's wrong (be specific)
+- **Proof**: Input or scenario that triggers the bug
+- **Fix**: Suggested code change
+
+End with:
+- "VERDICT: PASS" if no CRITICAL issues found
+- "VERDICT: FAIL" if CRITICAL issues exist`;
+}
+
+export function buildSecurityEdgeCasesReviewPrompt(defaultBranch: string): string {
+  return `You are a specialist code reviewer focused on SECURITY AND EDGE CASES.
+Your job is to FIND vulnerabilities — do NOT modify any files.
+
+## Instructions
+1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes
+2. Read every changed file in full for context
+3. Analyze from an attacker's perspective:
+   - Injection (SQL, command, XSS, path traversal, SSRF)
+   - Authentication/authorization bypasses
+   - Sensitive data exposure in logs, errors, responses
+   - Input validation gaps (malformed input, special chars, huge strings)
+   - Denial of service vectors (regex DoS, unbounded queries)
+   - Edge cases: empty inputs, concurrent requests, partial failures
+4. DO NOT modify any files. You are a READ-ONLY reviewer.
+
+## Output Format
+For each issue found:
+- **Severity**: CRITICAL / WARNING / NOTE
+- **File**: exact file path and line number
+- **Vulnerability**: What's the issue
+- **Attack scenario**: How to exploit it
+- **Fix**: Suggested remediation
+
+End with:
+- "VERDICT: PASS" if no CRITICAL issues found
+- "VERDICT: FAIL" if CRITICAL issues exist`;
+}
+
+export function buildDesignPerformanceReviewPrompt(defaultBranch: string): string {
+  return `You are a specialist code reviewer focused on DESIGN AND PERFORMANCE.
+Your job is to FIND design issues — do NOT modify any files.
+
+## Instructions
+1. Run \`git diff ${defaultBranch}...HEAD\` to see all changes
+2. Read changed files and related files for context
+3. Evaluate:
+   - Violations of existing code patterns and conventions
+   - Missing or inadequate test coverage
+   - API design issues (breaking changes, inconsistent interfaces)
+   - Performance problems (N+1 queries, unnecessary work, large allocations)
+   - Code duplication or missing abstractions
+   - Backwards compatibility concerns
+4. DO NOT modify any files. You are a READ-ONLY reviewer.
+
+## Output Format
+For each issue found:
+- **Severity**: CRITICAL / WARNING / NOTE
+- **File**: exact file path and line number
+- **Issue**: What's wrong
+- **Impact**: Concrete consequence
+- **Fix**: Suggested improvement
+
+End with:
+- "VERDICT: PASS" if no CRITICAL issues found
+- "VERDICT: FAIL" if CRITICAL issues exist`;
+}
+
+export function buildCodeFixPrompt(
+  defaultBranch: string,
+  bugsReview: string,
+  securityReview: string,
+  designReview: string,
+): string {
+  return `Fix ALL issues identified by the code reviewers below.
+
+## Review Findings
+
+### Bugs & Logic Review
+${bugsReview}
+
+### Security & Edge Cases Review
+${securityReview}
+
+### Design & Performance Review
+${designReview}
+
+## Instructions
+1. Run \`git diff ${defaultBranch}...HEAD\` to see current changes
+2. 
Fix every CRITICAL finding listed above
+3. Fix WARNING findings where the fix is straightforward
+4. Run tests after each fix to ensure no regressions
+5. CRITICAL: You MUST commit all fixes before finishing. Run \`git add -A && git commit -m "fix: <description>"\`. Uncommitted changes will be lost.
+6. Do NOT create a PR
+
+End with:
+- "VERDICT: FIXED" if all CRITICAL issues were addressed
+- "VERDICT: PARTIAL" if some could not be fixed (explain why)`;
+}
+
+export function buildPrCreationPrompt(title: string, description: string, defaultBranch: string, attachmentPaths: string[] = []): string {
+  const attachmentSection = attachmentPaths.length > 0
+    ? `\n\n## Attached Images\nUse the Read tool to view these images for visual context when writing the PR description:\n${attachmentPaths.map(p => `- ${p}`).join("\n")}`
+    : "";
+
+  return `Create a pull request for the changes on this branch.
+
+## Issue Details
+Title: ${title}
+Description: ${description}
+${attachmentSection}
+
+## Instructions
+1. Push the current branch to the remote
+2. Create a PR using \`gh pr create\` targeting ${defaultBranch}
+3. Use a descriptive title based on the issue
+4. Include a summary of changes in the PR body
+5.
Include the issue description for context
+
+Output the PR URL when done.`;
+}
diff --git a/src/lib/issues/poller-manager.ts b/src/lib/issues/poller-manager.ts
index f5823e0..2db98a9 100644
--- a/src/lib/issues/poller-manager.ts
+++ b/src/lib/issues/poller-manager.ts
@@ -1,8 +1,9 @@
 import { db } from "@/lib/db";
-import { issues, notificationConfigs } from "@/lib/db/schema";
+import { issues } from "@/lib/db/schema";
 import { eq, and, or, isNull, isNotNull, lt, sql } from "drizzle-orm";
 import { getIssuesTelegramConfig, pollTelegramUpdates, processTelegramUpdate } from "./telegram-poller";
 import { runIssuePipeline } from "./pipeline";
+import { getNotificationConfig, upsertNotificationConfig } from "@/lib/db/notification-config";
 import type { IssuesTelegramConfig, IssuesTransportConfig } from "./types";

 // Stale lock threshold: 4 hours (covers worst-case pipeline: 3 plan iterations + impl + reviews + QA waits)
@@ -55,26 +56,13 @@ export function ensurePollerRunning(): void {

 // ── Shared poller logic (used by both in-process and standalone script) ──

-export async function getOffset(): Promise<number> {
-  const [row] = await db.select().from(notificationConfigs)
-    .where(eq(notificationConfigs.channel, "telegram-issues-offset")).limit(1);
+export function getOffset(): number {
+  const row = getNotificationConfig("telegram-issues-offset");
   return row ?
parseInt((row.config as Record<string, string>).offset || "0") : 0;
 }

-export async function setOffset(offset: number) {
-  const [existing] = await db.select().from(notificationConfigs)
-    .where(eq(notificationConfigs.channel, "telegram-issues-offset")).limit(1);
-  if (existing) {
-    await db.update(notificationConfigs)
-      .set({ config: { offset: String(offset) }, updatedAt: new Date() })
-      .where(eq(notificationConfigs.id, existing.id));
-  } else {
-    await db.insert(notificationConfigs).values({
-      channel: "telegram-issues-offset",
-      enabled: true,
-      config: { offset: String(offset) },
-    });
-  }
+export function setOffset(offset: number) {
+  upsertNotificationConfig("telegram-issues-offset", { offset: String(offset) });
 }

 /** Clear locks that are older than STALE_LOCK_MS (e.g. from crashed processes). */
diff --git a/src/lib/notifications/slack.ts b/src/lib/notifications/slack.ts
index 07fa222..da63bd3 100644
--- a/src/lib/notifications/slack.ts
+++ b/src/lib/notifications/slack.ts
@@ -1,6 +1,6 @@
 import nodeFetch from "node-fetch";

-export const SLACK_MAX_MSG_LEN = 40_000;
+const SLACK_MAX_MSG_LEN = 40_000;
 export const SLACK_SAFE_MSG_LEN = 35_000;

 export interface SlackConfig {
@@ -24,7 +24,7 @@
 export function isValidSlackAppToken(token: string): boolean {
   return /^xapp-[A-Za-z0-9-]+$/.test(token);
 }

-export function escapeSlackText(text: string): string {
+function escapeSlackText(text: string): string {
   return text
     .replace(/&/g, "&amp;")
     .replace(/</g, "&lt;")
     .replace(/>/g, "&gt;");
): NodeJS.ProcessEnv {
diff --git a/src/lib/runner/types.ts b/src/lib/runner/types.ts
index 9f8e42e..4ea5b70 100644
--- a/src/lib/runner/types.ts
+++ b/src/lib/runner/types.ts
@@ -1,4 +1,4 @@
-export interface AgentConfig {
+interface AgentConfig {
   name: string;
   enabled: boolean;
   schedule: string; // cron expression
diff --git a/src/lib/utils/__tests__/format.test.ts b/src/lib/utils/__tests__/format.test.ts
index 7c7b517..4be61cb 100644
--- a/src/lib/utils/__tests__/format.test.ts
+++ b/src/lib/utils/__tests__/format.test.ts
@@ -1,5
+1,5 @@ import { describe, test, expect } from "bun:test"; -import { formatDuration } from "../format"; +import { formatDuration, formatTokens } from "../format"; describe("formatDuration", () => { test("formats 0ms", () => { @@ -22,3 +22,23 @@ describe("formatDuration", () => { expect(formatDuration(12345)).toBe("12.3s"); }); }); + +describe("formatTokens", () => { + test("formats small numbers as-is", () => { + expect(formatTokens(0)).toBe("0"); + expect(formatTokens(1)).toBe("1"); + expect(formatTokens(999)).toBe("999"); + }); + + test("formats thousands with K suffix", () => { + expect(formatTokens(1000)).toBe("1.0K"); + expect(formatTokens(1500)).toBe("1.5K"); + expect(formatTokens(999_999)).toBe("1000.0K"); + }); + + test("formats millions with M suffix", () => { + expect(formatTokens(1_000_000)).toBe("1.0M"); + expect(formatTokens(2_500_000)).toBe("2.5M"); + expect(formatTokens(10_000_000)).toBe("10.0M"); + }); +}); diff --git a/src/lib/utils/format.ts b/src/lib/utils/format.ts index 7080109..5ea7c42 100644 --- a/src/lib/utils/format.ts +++ b/src/lib/utils/format.ts @@ -5,3 +5,12 @@ export function formatDuration(ms: number): string { if (ms < 1000) return `${ms}ms`; return `${(ms / 1000).toFixed(1)}s`; } + +/** + * Format a token count to a human-readable abbreviated string. + */ +export function formatTokens(n: number): string { + if (n >= 1_000_000) return `${(n / 1_000_000).toFixed(1)}M`; + if (n >= 1_000) return `${(n / 1_000).toFixed(1)}K`; + return String(n); +} diff --git a/src/lib/validations/agent.ts b/src/lib/validations/agent.ts index 12fd878..ee333bc 100644 --- a/src/lib/validations/agent.ts +++ b/src/lib/validations/agent.ts @@ -1,5 +1,5 @@ import { z } from "zod"; -import { DENIED_ENV_KEYS } from "@/lib/runner/agent-memory"; +import { DENIED_ENV_KEYS } from "@/lib/validations/constants"; // Only allow safe cron characters: digits, letters (MON-FRI, JAN-DEC), *, /, -, comma, #, L, W, ? 
// Uses ` +` (spaces only) instead of `\s+` to prevent embedded newlines/tabs from corrupting crontab diff --git a/src/lib/validations/constants.ts b/src/lib/validations/constants.ts new file mode 100644 index 0000000..fb55f73 --- /dev/null +++ b/src/lib/validations/constants.ts @@ -0,0 +1,8 @@ +/** + * Env var keys that must not be overridden by agent config. + * Shared between agent runner (deny-list enforcement) and validation schemas. + */ +export const DENIED_ENV_KEYS = new Set([ + "PATH", "LD_PRELOAD", "LD_LIBRARY_PATH", "NODE_OPTIONS", + "HOME", "SHELL", "USER", "LOGNAME", "DYLD_INSERT_LIBRARIES", +]); diff --git a/src/lib/validations/mcp.ts b/src/lib/validations/mcp.ts new file mode 100644 index 0000000..43e0f1e --- /dev/null +++ b/src/lib/validations/mcp.ts @@ -0,0 +1,9 @@ +import { z } from "zod"; + +export const mcpServerUpdateSchema = z.object({ + name: z.string().optional(), + command: z.string().optional(), + args: z.array(z.string()).optional(), + env: z.record(z.string(), z.string()).optional(), + enabled: z.boolean().optional(), +}).strict();
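
The new `mcpServerUpdateSchema` closes the "MCP PATCH validation" gap from the analysis table: `.strict()` rejects any key outside the whitelist, so a PATCH body can only touch the intended fields. As a dependency-free sketch of the behavior the schema enforces (the real route would call `mcpServerUpdateSchema.safeParse(body)`; this hand-rolled validator is illustrative only, not part of the diff):

```typescript
// Illustrative only: mirrors what mcpServerUpdateSchema with .strict() enforces,
// without requiring zod. The real route should use the schema directly.
const ALLOWED = new Set(["name", "command", "args", "env", "enabled"]);

function validateMcpServerUpdate(body: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of Object.keys(body)) {
    // .strict() fails on any key outside the schema — the core of the fix
    if (!ALLOWED.has(key)) errors.push(`unrecognized key: ${key}`);
  }
  if ("name" in body && typeof body.name !== "string") errors.push("name must be a string");
  if ("command" in body && typeof body.command !== "string") errors.push("command must be a string");
  if ("enabled" in body && typeof body.enabled !== "boolean") errors.push("enabled must be a boolean");
  if ("args" in body && !(Array.isArray(body.args) && body.args.every((a) => typeof a === "string"))) {
    errors.push("args must be an array of strings");
  }
  if ("env" in body) {
    const env = body.env;
    const ok = typeof env === "object" && env !== null && !Array.isArray(env)
      && Object.values(env).every((v) => typeof v === "string");
    if (!ok) errors.push("env must be a record of strings");
  }
  return errors;
}
```

A body like `{ enabled: true }` passes, while `{ enabled: true, id: "other" }` is rejected outright, which is the property the plan's security-gap row calls for.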