Conversation
Closes github#80 Generated by ralph-starter auto mode
Closes github#79 Generated by ralph-starter auto mode
Strip numbered list prefixes, bullet markers, HTML tags, markdown links, and collapse whitespace in task names for cleaner display. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Increase completed task name limit from 25 to 50 chars and loop header task name limit from 40 to 60 chars so task context is preserved in the CLI output. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace hardcoded truncation limits with terminal-width-aware helpers. Task names now adapt to the available terminal space instead of cutting at arbitrary character counts. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
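A terminal-width-aware truncation helper along these lines is a minimal sketch of the idea (hypothetical names; the project's actual UI helpers may differ). `reserved` stands for the columns taken up by icons, counters, and padding on the same line:

```typescript
// Sketch of width-aware truncation (hypothetical helper, not the
// project's actual code). Reserves fixed columns for decorations and
// clips the task name with an ellipsis only when it doesn't fit.
function truncateToWidth(
  name: string,
  reserved: number,
  width: number = process.stdout.columns ?? 80
): string {
  const available = Math.max(10, width - reserved); // keep a readable floor
  if (name.length <= available) return name;
  return name.slice(0, available - 1) + '…';
}
```

This replaces fixed limits like 25 or 50 characters with a value derived from the live terminal width.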
Show a compact icon (GitHub, Linear, Figma, Notion, etc.) next to task names in the loop header so users can quickly identify where tasks originated from. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the per-character color cycling shimmer effect with a subtle slow pulse between white and cyan. Much more readable and accessible while still providing visual feedback. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add box.ts with drawBox(), drawSeparator(), and renderProgressBar() helpers. Update ProgressRenderer with iteration tracking, progress bar display, and live cost indicator. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace plain separator lines with box-drawing UI throughout the executor. Adds a startup config summary box, per-iteration header boxes with agent/iteration info, and a clean completion banner with stats. Wire progress bar with iteration and cost tracking. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Show a status separator between iterations with iteration count, task progress, cost, and elapsed time. Replace verbose validation error output with a compact one-line summary. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…s support - Add .agents/skills/ directory scanning (multi-agent skill sharing) - Support subdirectories with SKILL.md inside (not just flat .md files) - Parse YAML frontmatter for skill name and description - Parse npx add-skill commands from skills.sh - Add findSkill() helper for looking up skills by name - Refactor directory scanning into reusable scanSkillsDir() Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add 6 curated skill entries across 4 categories (agents, development, testing, design) - Group skills by category in list output with visual separators - Add 'info' action to show installed skill details - Search now matches against categories - Add interactive browse with categorized choices Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Read version dynamically from package.json instead of hardcoding '0.1.0'. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add ralph_list_presets tool to discover all 19 workflow presets by category - Add ralph_fetch_spec tool to preview specs from GitHub, Linear, Notion, and Figma without running the full coding loop - Improve all existing tool descriptions with detailed context for LLM clients - ralph_fetch_spec supports Figma modes (spec, tokens, components, content, assets) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add figma_to_code prompt for Figma design-to-code workflow with framework and mode selection (spec, tokens, components, content) - Add batch_issues prompt for processing multiple GitHub/Linear issues automatically with auto mode - Update fetch_and_build prompt to include Figma as a source option - Update fetch_and_build to use ralph_fetch_spec for preview before building Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add .ralph/activity.md as a readable MCP resource so Claude Desktop and other MCP clients can access loop execution history, timing data, and cost information. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Introduces a context builder that progressively narrows the prompt sent to agents across loop iterations. Iteration 1 gets full context (spec + skills + plan), iterations 2-3 get trimmed plan context, and iterations 4+ get minimal context with just the current task. Validation feedback is compressed to reduce token waste. Adds --context-budget flag for optional token budget enforcement. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
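The narrowing schedule described in this commit can be sketched as a pure tier selector (hypothetical names, not the project's actual context-builder API):

```typescript
// Sketch of iteration-based context tiers as described above
// (hypothetical function and type names).
type ContextTier = 'full' | 'trimmed-plan' | 'minimal';

function pickContextTier(iteration: number): ContextTier {
  if (iteration <= 1) return 'full'; // spec + skills + plan
  if (iteration <= 3) return 'trimmed-plan'; // trimmed plan context
  return 'minimal'; // just the current task
}
```

Keeping the schedule in one small function makes the token-budget behavior easy to test in isolation.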
Replaces raw fetch calls to Anthropic API with the official @anthropic-ai/sdk, enabling prompt caching via cache_control on system messages. Cache reads are 90% cheaper than regular input tokens. Adds cache-aware pricing to cost tracker with savings metrics displayed in CLI output and activity summaries. Also adds system message support and usage tracking for OpenAI/OpenRouter providers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds --batch flag to ralph-starter auto command that submits tasks via the Anthropic Batch API instead of running agent loops. Batch requests are processed asynchronously at 50% cost reduction. Includes polling with exponential backoff, progress display, and cost savings summary. Note: batch mode uses the API directly (no tool use), best for planning, code generation, and review tasks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
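Polling with exponential backoff, as mentioned for batch mode, can be sketched like this (hypothetical names; the real command would wire `check` to the Anthropic Batch API status endpoint):

```typescript
// Sketch of exponential-backoff polling (hypothetical helper).
// `check` resolves to a non-null value once the batch has ended.
async function pollUntilDone<T>(
  check: () => Promise<T | null>,
  baseMs = 1_000,
  maxMs = 60_000,
  maxAttempts = 50
): Promise<T> {
  let delay = baseMs;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check();
    if (result !== null) return result;
    await new Promise((r) => setTimeout(r, delay));
    delay = Math.min(delay * 2, maxMs); // double the wait, capped
  }
  throw new Error('Batch did not complete within the polling budget');
}
```

The doubling-with-cap keeps early checks responsive while avoiding hammering the API on long-running batches.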
- Fix batch_issues prompt: use ralph_run with auto mode instead of incorrectly referencing ralph_fetch_spec for listing issues - Fix handleListPresets category filter: use strict equality instead of substring match to prevent unintended matches - handleFetchSpec already passes path to fetchFromIntegration (linter fix) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Normalize framework casing in figma_to_code prompt (consistent display name) - Guard undefined args in handleListPresets with fallback to empty object - Enforce non-empty path in ralph_fetch_spec schema (.min(1)) and always assign path to options instead of truthiness check Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ns (#133) - Dynamic task name truncation based on terminal width - Source icons for GitHub, Linear, Notion, Figma tasks Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Intelligent context trimming based on iteration progress - Reduces token usage by focusing on relevant context per loop - New context-builder module for structured prompt assembly Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Enhanced MCP tools and prompts for better agent integration - Improved resource definitions for Claude Desktop - Figma-to-code MCP prompt support Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Enhanced skill detection for project tech stacks - Improved skill registry with better search - Better skill matching for coding agents Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Restore drawSeparator import in executor.ts - Remove conflicting getPackageVersion import in server.ts Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Merge PR #139 - adds prompt caching via Anthropic SDK beta header for reduced latency and cost on repeated API calls. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Merge PR #140 - adds --batch flag to auto mode that submits tasks via the Anthropic Batch API for 50% cost savings. Includes polling with exponential backoff and progress display. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add empty requests guard in submitBatch - Add retry logic for transient errors in polling loop - Extract taskCustomId helper to avoid duplicated pattern - Add pricing caveat for non-Sonnet models Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Merge PR #122 - adds pause/resume commands and session state management for recovering from rate limits gracefully. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds session persistence with pause/resume CLI commands, Anthropic Batch API support and batch-mode prompts, iterative context builder and cache-aware cost tracking, UI/UX improvements (boxed output, progress bar), rate-limit display utilities, enhanced skill discovery, and new CLI options (--context-budget, --batch, --model).
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Deploying ralph-starter with

Latest commit: f521073
Status: ✅ Deploy successful!
Preview URL: https://8d0a0fc9.ralph-starter.pages.dev
Branch Preview URL: https://staging-pre-conference.ralph-starter.pages.dev
Add `stable-release` label support to prepare-release workflow. When a PR with this label is merged to main, the workflow strips the prerelease suffix and applies a proper semver bump instead of incrementing the beta number. Example: 0.1.1-beta.16 + feat PR + stable-release → 0.2.0 Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
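The version logic described here can be sketched as follows (a hypothetical helper, assuming plain `major.minor.patch[-prerelease]` strings rather than a full semver parser):

```typescript
// Sketch of the stable-release bump described above (hypothetical
// helper; the actual workflow may implement this differently).
function nextStableVersion(current: string, bump: 'major' | 'minor' | 'patch'): string {
  // Strip any prerelease suffix, e.g. "0.1.1-beta.16" -> "0.1.1"
  const base = current.split('-')[0];
  const [major, minor, patch] = base.split('.').map(Number);
  if (bump === 'major') return `${major + 1}.0.0`;
  if (bump === 'minor') return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}
```

With this, the example from the commit message holds: a feat PR (minor bump) on 0.1.1-beta.16 with the stable-release label yields 0.2.0.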
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/commands/skill.ts (1)
265-285: 🧹 Nitpick | 🔵 Trivial
`browseSkills` doesn't show category information in the selection list. Unlike `listSkills`, which now groups by category, the interactive browser just shows a flat list. Consider grouping the choices or at least showing the category in the choice name for consistency.

♻️ Suggested: Add category to browse choices
```diff
 async function browseSkills(): Promise<void> {
+  // Group choices by category
+  const choices: Array<{ name: string; value: string | null } | inquirer.Separator> = [];
+  const categories = Object.keys(CATEGORY_LABELS);
+
+  for (const category of categories) {
+    const categorySkills = POPULAR_SKILLS.filter((s) => s.category === category);
+    if (categorySkills.length > 0) {
+      choices.push(new inquirer.Separator(chalk.dim(`── ${CATEGORY_LABELS[category]} ──`)));
+      for (const repo of categorySkills) {
+        choices.push({
+          name: `${repo.name} - ${chalk.dim(repo.description)}`,
+          value: repo.name,
+        });
+      }
+    }
+  }
+  choices.push(new inquirer.Separator());
+  choices.push({ name: chalk.dim('Cancel'), value: null });
+
   const { skill } = await inquirer.prompt([
     {
       type: 'list',
       name: 'skill',
       message: 'Select a skill to install:',
-      choices: [
-        ...POPULAR_SKILLS.map((repo) => ({
-          name: `${repo.name} - ${chalk.dim(repo.description)}`,
-          value: repo.name,
-        })),
-        new inquirer.Separator(),
-        { name: chalk.dim('Cancel'), value: null },
-      ],
+      choices,
     },
   ]);
```

src/ui/progress-renderer.ts (1)
38-55: ⚠️ Potential issue | 🟡 Minor

Reset iteration/cost state when starting a new render.

The new fields aren't cleared in `start()`, so reusing the renderer can show stale progress/cost.

🛠️ Suggested fix

```diff
   start(initialStep: string): void {
     this.currentStep = initialStep;
     this.startTime = Date.now();
     this.frame = 0;
     this.subStep = '';
+    this.currentIteration = 0;
+    this.maxIterations = 0;
+    this.currentCost = 0;
```

src/commands/run.ts (1)
191-219: ⚠️ Potential issue | 🟡 Minor

Don't drop `--context-budget 0` due to a truthy check.

`options.contextBudget ?` treats 0 as undefined, so an explicit zero is ignored. Use an explicit undefined check to respect CLI input (or validate and fail fast).

🛠️ Suggested fix

```diff
-      contextBudget: options.contextBudget ? Number(options.contextBudget) : undefined,
+      contextBudget:
+        options.contextBudget !== undefined ? Number(options.contextBudget) : undefined,
```

Also applies to: 566-590
src/loop/cost-tracker.ts (1)
14-36: 🧹 Nitpick | 🔵 Trivial

Consider adding a pricing date comment for maintainability.

The pricing comment says "January 2026" but model pricing changes frequently. Consider adding a TODO or mechanism to validate/update pricing periodically.

```diff
-// Pricing as of January 2026 (approximate)
+// Pricing as of January 2026 (approximate)
+// TODO: Verify pricing at https://www.anthropic.com/pricing periodically
```
🤖 Fix all issues with AI agents
In `@docs/docs/advanced/rate-limits.md`:
- Around line 34-52: The fenced code block beginning with "⚠ Claude rate limit
reached" in the rate-limits documentation is unlabeled (MD040); update the
opening fence to include a language identifier such as text or console (e.g.,
```text) so markdownlint passes and rendering improves, leaving the block
content unchanged and keeping the closing ``` as-is.
In `@src/commands/auto.ts`:
- Around line 358-369: Replace the hardcoded Sonnet per‑million token rates and
duplicated math in auto.ts with the shared pricing and helper from
cost-tracker.ts: import MODEL_PRICING and the calculateCost utility, compute the
base cost via calculateCost(totalInputTokens, totalOutputTokens,
MODEL_PRICING['sonnet'] or equivalent), then apply the 50% batch discount as a
separate multiplier to get the displayed estimate and compute the fullPriceCost
for the "saved" delta; update references to
inputCost/outputCost/totalCost/fullPriceCost to use these reused values rather
than the inline constants so pricing stays consistent and maintainable.
In `@src/commands/resume.ts`:
- Around line 152-159: The current rate-limit detection uses a case-sensitive
check on result.error via isRateLimit = result.error?.includes('Rate limit');
update this to a case-insensitive check (for example, normalize the error string
with toLowerCase() or use a case-insensitive regex) so variations like "rate
limit", "Rate Limit", or "RATE LIMIT" are detected; update the isRateLimit
assignment and any downstream logic that depends on it (referencing result.error
and isRateLimit in resume.ts) to use the normalized comparison.
- Around line 111-130: The reconstructed LoopOptions object in resume.ts is
missing the contextBudget field, so restore it by adding contextBudget:
session.options.contextBudget to the loopOptions assignment (referencing
LoopOptions and loopOptions) so resumed sessions preserve the original
--context-budget setting; update any type/interface if needed to accept
contextBudget from session.options.contextBudget.
In `@src/commands/skill.ts`:
- Around line 106-108: The 'info' branch should validate skillName like the
other branches: before calling showSkillInfo, check that skillName (the parsed
argument used in the switch case 'info') is defined and non-empty and if not,
log the same error/usage message and exit/return as done for the 'add' branch;
otherwise call showSkillInfo(skillName). Update the switch case handling for
'info' to mirror the validation logic used around 'add' so behavior is
consistent across commands.
- Around line 254-262: The current try/catch prints the entire skill file (using
readFileSync(skill.path, 'utf-8')) which can flood the terminal for very large
files; before reading, perform a file size check (fs.statSync or
fs.promises.stat) on skill.path and if the size exceeds a safe threshold (e.g.,
100KB) avoid printing the whole file—either read and display only the first N
bytes/lines (e.g., readFileSync(...).slice(0, MAX_BYTES) or stream the file and
stop after MAX_BYTES) and print a clear truncation/warning message (e.g.,
"Output truncated: file X KB, showing first Y KB"), or offer a follow-up hint to
view the full file. Update the try/catch block around readFileSync(skill.path,
...) to implement this size check and truncated output behavior and reference
skill.path and the local read/print logic so the change is easy to locate.
- Around line 160-175: The current category list (categories) is derived from
new Set(POPULAR_SKILLS.map(...)) which relies on insertion order of
POPULAR_SKILLS and can yield inconsistent ordering; change the iteration to use
the explicit ordering from CATEGORY_LABELS (e.g., iterate
Object.keys(CATEGORY_LABELS) or the CATEGORY_LABELS entry order) and for each
category filter POPULAR_SKILLS (as done with categorySkills) so categories are
always displayed in the defined CATEGORY_LABELS order (update the loop that
currently iterates over categories to use CATEGORY_LABELS instead).
In `@src/llm/api.ts`:
- Around line 61-69: The singleton anthropicClient currently locks in the first
API key so subsequent calls to getAnthropicClient(...) keep using stale
credentials; update getAnthropicClient to either maintain a cache keyed by
apiKey (e.g., a Map from apiKey to Anthropic) or detect when the passed apiKey
differs from the existing anthropicClient and recreate the client with new
credentials (using Anthropic and API_TIMEOUT_MS), ensuring requests use the
correct key.
In `@src/loop/context-builder.ts`:
- Around line 131-139: Remove the unused destructured variable fullTask from the
ContextBuilder function signature (currently destructured alongside
taskWithSkills, currentTask, taskInfo, iteration, validationFeedback,
maxInputTokens) and update the ContextBuildOptions interface to drop the
fullTask property if it isn't used elsewhere; ensure all references in this file
use taskWithSkills (which contains the full task) and run the build/tests to
confirm no other code expects ContextBuildOptions.fullTask.
- Around line 154-169: The current iteration-1 context always renders subtasks
as incomplete; update the mapping that builds subtasksList in the iteration ===
1 branch (inside build... context code that sets prompt) to mirror
buildTrimmedPlanContext's logic by rendering each subtask as "- [x] name" when
the subtask's completion boolean (the same property used in
buildTrimmedPlanContext, e.g., st.done or st.completed) is true and "- [ ] name"
when false, keeping the join('\n') fallback; ensure you reference
currentTask.subtasks and the same completion property to preserve consistent
status display.
- Around line 206-218: The current aggressive truncation (in context-builder.ts:
variables prompt, estimatedTokens, maxInputTokens, targetChars) can cut
structured content mid-section; change the truncation to cut at a safe boundary:
compute the desired char limit (targetChars) but then search backward from that
position for natural breakpoints (preferably the last double-newline "\n\n",
then last Markdown header pattern like "\n# ", then last list/line-break), and
ensure you are not inside an open code fence by scanning for unmatched "```"
toggles before the cut; if a safe breakpoint is found, truncate prompt there,
otherwise fall back to the original slice, set wasTrimmed and push the
debugParts message as before, and append the truncation notice.
In `@src/loop/session.ts`:
- Around line 113-127: The loadSession function currently only checks
session.id, session.task, and session.agent; update loadSession to validate and
normalize the full SessionState shape (including stats, options, commits, and
any counters used later) before returning so callers won’t crash on
stale/corrupt files—after parsing the JSON (in loadSession) ensure missing
fields are populated with safe defaults (e.g., empty objects/arrays or zeroed
counters) and coerce types where needed, referencing SessionState, stats,
options, commits, and any counter properties accessed later (see uses around
line numbers referenced in the review) so the returned session always has the
expected structure.
- Around line 270-303: The formatSessionSummary function should sanitize
user-provided strings (session.task, session.pauseReason, session.error) before
including them in terminal output to prevent control characters/ANSI injection;
add or call a small sanitizer (e.g., strip ANSI escape sequences and control
chars via regex matching C0 controls \x00-\x1F and \x7F and ANSI CSI sequences
like \x1b\[[0-9;]*[A-Za-z]) and use the sanitized value for slicing and all
lines.push calls (update references in formatSessionSummary where session.task
is sliced and where session.pauseReason and session.error are appended) so no
raw user string is written to the terminal.
- Around line 133-137: The saveSession function currently writes directly to the
session file (getSessionPath) which can lead to corruption and world-readable
data; change saveSession to perform an atomic write by writing JSON to a
temporary file in the same directory (e.g., sessionPath + '.tmp' or using a
unique suffix), fsyncing the temp file (or its fd) to ensure contents hit disk,
set the temp file's mode to 0o600 (owner-only) when creating/writing, then
rename the temp file to the final sessionPath (atomic on POSIX), and ensure any
temp file is cleaned up on error; update references to fs.writeFile usage in
saveSession to use open/write/fsync/close/rename sequence to implement this.
In `@src/loop/skills.ts`:
- Around line 87-114: The try/catch around statSync(fullPath) then
readFileSync(fullPath, 'utf-8') introduces a TOCTOU window; change the pattern
in the loop that builds skills (the block that currently calls statSync,
stats.isFile(), readFileSync, and the SKILL.md branch) to instead attempt to
read the file directly and handle errors: first try readFileSync(fullPath,
'utf-8') and if it succeeds treat it as a flat .md skill (use
extractName/extractDescription and push to skills), if readFileSync throws
EISDIR then try reading join(fullPath, 'SKILL.md') and if that succeeds treat it
as the directory skill, and otherwise ignore ENOENT/permission errors in the
catch — remove the separate statSync usage so you eliminate the race while
keeping extractName, extractDescription, and skills push logic intact.
In `@src/mcp/server.ts`:
- Around line 18-27: The local getPackageVersion() duplicates shared logic from
src/utils/version.ts; remove this local function and instead import and call the
shared exporter (e.g., import { getPackageVersion } from 'src/utils/version' or
the actual exported name in that module) wherever getPackageVersion() is used in
this file (server.ts), preserving the original call sites so the module uses the
cached/multi-candidate logic from the utility rather than the duplicate
implementation.
In `@src/ui/box.ts`:
- Around line 56-68: The renderProgressBar function can divide by zero when
total <= 0; update renderProgressBar to guard against non-positive totals by
computing ratio = 0 when total <= 0 (or otherwise use Math.min(1, Math.max(0,
current/total))) so you never compute current/total with total <= 0; locate the
function renderProgressBar in src/ui/box.ts and adjust the ratio calculation to
handle total <= 0 (also consider clamping current to [0, total] before computing
the ratio).
In `@src/ui/progress-renderer.ts`:
- Around line 109-127: The clear logic assumes the previous render had the same
number of lines as the current render, which leaves a stale bar when
this.maxIterations goes from >0 to 0; update the clearing logic in the
ProgressRenderer (variables: this.lastRender, lineCount, clearUp, clear,
process.stdout.write) to compute the previous render's line count (e.g.,
prevLineCount = this.lastRender ? this.lastRender.split('\n').length : 0) and
use prevLineCount when building clearUp so you always clear all previously
rendered lines (use prevLineCount for the up/clear sequence when this.lastRender
exists, and fall back to '\r\x1B[K' on first render) before writing the new
line.
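The safe-boundary truncation suggested for `src/loop/context-builder.ts` above could look roughly like this (a sketch with hypothetical names, not the project's actual code):

```typescript
// Sketch of truncating at a natural boundary instead of mid-section
// (hypothetical helper). Prefers a paragraph break, then a Markdown
// header, then any line break, and closes an open code fence if the
// cut would leave one dangling.
function truncateAtBoundary(text: string, targetChars: number): string {
  if (text.length <= targetChars) return text;
  const slice = text.slice(0, targetChars);
  const candidates = [slice.lastIndexOf('\n\n'), slice.lastIndexOf('\n# '), slice.lastIndexOf('\n')];
  const cut = candidates.find((i) => i > 0);
  const truncated = cut !== undefined ? slice.slice(0, cut) : slice;
  // Triple-backtick fence built without literal backticks in this snippet.
  const fence = String.fromCharCode(96).repeat(3);
  const fenceOpen = truncated.split(fence).length % 2 === 0; // odd fence count => still open
  return truncated + (fenceOpen ? '\n' + fence : '') + '\n\n[...context truncated...]';
}
```

Falling back to the raw slice only when no breakpoint exists matches the behavior the review asks for.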
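The atomic, owner-only session write suggested for `src/loop/session.ts` can be sketched as follows (a hypothetical helper using synchronous fs calls for brevity; the real `saveSession` may differ in details):

```typescript
// Sketch of an atomic session write (hypothetical helper): write to a
// temp file with owner-only permissions, fsync, then rename into place.
import { openSync, writeSync, fsyncSync, closeSync, renameSync, unlinkSync, readFileSync } from 'node:fs';

function writeSessionAtomic(sessionPath: string, data: object): void {
  const tmpPath = `${sessionPath}.tmp`;
  const fd = openSync(tmpPath, 'w', 0o600); // owner-only temp file
  try {
    writeSync(fd, JSON.stringify(data, null, 2));
    fsyncSync(fd); // ensure bytes hit disk before the rename
  } finally {
    closeSync(fd);
  }
  try {
    renameSync(tmpPath, sessionPath); // atomic on POSIX filesystems
  } catch (err) {
    try { unlinkSync(tmpPath); } catch { /* best-effort cleanup */ }
    throw err;
  }
}
```

Readers of the session file then see either the old contents or the new contents, never a half-written file.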
docs/docs/advanced/rate-limits.md (Outdated)

````
```
⚠ Claude rate limit reached

Rate Limit Stats:
• Session usage: 100% (50K / 50K tokens)
• Requests made: 127 this hour
• Time until reset: ~47 minutes (resets at 04:30 UTC)

Session Progress:
• Tasks completed: 3/5
• Current task: "Add swarm mode CLI flags"
• Branch: auto/github-54
• Iterations completed: 12

To resume when limit resets:
ralph-starter run

Tip: Check your limits at https://claude.ai/settings
```
````
Add a language identifier to the fenced block (MD040).
markdownlint flags the unlabeled fence; tagging it as text (or console) clears the lint and improves rendering.
🔧 Suggested fix
````diff
-```
+```text
 ⚠ Claude rate limit reached
 ...
-```
+```
````

🧰 Tools

🪛 markdownlint-cli2 (0.20.0)
[warning] 34-34: Fenced code blocks should have a language specified (MD040, fenced-code-language)
```ts
  // Approximate cost at Sonnet pricing with 50% batch discount
  // Note: actual cost varies by model (Haiku is ~10x cheaper, Opus ~5x more)
  const inputCost = (totalInputTokens / 1_000_000) * 3 * 0.5;
  const outputCost = (totalOutputTokens / 1_000_000) * 15 * 0.5;
  const totalCost = inputCost + outputCost;
  const fullPriceCost = inputCost * 2 + outputCost * 2;

  console.log(`  ${chalk.dim('Tokens:')} ${totalInputTokens} in / ${totalOutputTokens} out`);
  console.log(
    `  ${chalk.dim('Est. cost (Sonnet):')} $${totalCost.toFixed(4)} (saved $${(fullPriceCost - totalCost).toFixed(4)} vs standard pricing)`
  );
}
```
🧹 Nitpick | 🔵 Trivial
Hardcoded pricing may become stale.
The Sonnet pricing ($3/$15 per million tokens) is hardcoded here. While the comment acknowledges model variations, this could drift from actual pricing. Consider extracting to MODEL_PRICING in cost-tracker.ts for consistency.
The cost calculation here duplicates logic from cost-tracker.ts. For maintainability, consider reusing MODEL_PRICING and calculateCost utilities, applying the 50% batch discount factor separately.
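One way to centralize this, sketched with hypothetical shapes (the project's actual `MODEL_PRICING` and cost-tracker API may differ), keeps base pricing in one place and applies the batch discount as a separate factor:

```typescript
// Sketch of shared pricing with a separate batch discount
// (hypothetical names and shapes). Per-million-token rates for Sonnet
// ($3 in / $15 out) come from the excerpt above.
interface ModelPricing {
  inputPerMTok: number; // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

const SONNET_PRICING: ModelPricing = { inputPerMTok: 3, outputPerMTok: 15 };
const BATCH_DISCOUNT = 0.5; // Batch API is half price

function calculateCost(inputTokens: number, outputTokens: number, pricing: ModelPricing): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMTok +
    (outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}

function batchCostSummary(inputTokens: number, outputTokens: number, pricing: ModelPricing) {
  const fullPrice = calculateCost(inputTokens, outputTokens, pricing);
  const discounted = fullPrice * BATCH_DISCOUNT;
  return { discounted, saved: fullPrice - discounted };
}
```

Keeping the discount out of the base rates means a pricing update touches one table instead of every call site.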
```ts
  const loopOptions: LoopOptions = {
    task: session.task,
    cwd: session.cwd,
    agent,
    maxIterations: remainingIterations,
    auto: session.options.auto,
    commit: session.options.commit,
    push: session.options.push,
    pr: session.options.pr,
    prTitle: session.options.prTitle,
    validate: session.options.validate,
    completionPromise: session.options.completionPromise,
    requireExitSignal: session.options.requireExitSignal,
    circuitBreaker: session.options.circuitBreaker,
    rateLimit: session.options.rateLimit,
    trackProgress: session.options.trackProgress,
    checkFileCompletion: session.options.checkFileCompletion,
    trackCost: session.options.trackCost,
    model: session.options.model,
  };
```
🧹 Nitpick | 🔵 Trivial
Consider restoring contextBudget from session options.
The LoopOptions reconstruction includes most session options but appears to be missing contextBudget. If a session was started with --context-budget, that setting won't be preserved on resume.
♻️ Proposed fix to include contextBudget
```diff
     trackProgress: session.options.trackProgress,
     checkFileCompletion: session.options.checkFileCompletion,
     trackCost: session.options.trackCost,
     model: session.options.model,
+    contextBudget: session.options.contextBudget,
   };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  const loopOptions: LoopOptions = {
    task: session.task,
    cwd: session.cwd,
    agent,
    maxIterations: remainingIterations,
    auto: session.options.auto,
    commit: session.options.commit,
    push: session.options.push,
    pr: session.options.pr,
    prTitle: session.options.prTitle,
    validate: session.options.validate,
    completionPromise: session.options.completionPromise,
    requireExitSignal: session.options.requireExitSignal,
    circuitBreaker: session.options.circuitBreaker,
    rateLimit: session.options.rateLimit,
    trackProgress: session.options.trackProgress,
    checkFileCompletion: session.options.checkFileCompletion,
    trackCost: session.options.trackCost,
    model: session.options.model,
    contextBudget: session.options.contextBudget,
  };
```
```ts
    // Check if it's a rate limit issue
    const isRateLimit = result.error?.includes('Rate limit');

    if (isRateLimit) {
      console.log(chalk.yellow.bold('  ⏸ Session paused due to rate limit'));
      console.log();
      console.log(chalk.dim('  To resume later, run:'));
      console.log(chalk.cyan('  ralph-starter resume'));
```
🧹 Nitpick | 🔵 Trivial
Rate limit detection could be more robust.
The check result.error?.includes('Rate limit') is case-sensitive and may miss variations like "rate limit" or "Rate Limit". Consider a case-insensitive check for resilience.
🛡️ Proposed improvement
```diff
   // Check if it's a rate limit issue
-  const isRateLimit = result.error?.includes('Rate limit');
+  const isRateLimit = result.error?.toLowerCase().includes('rate limit');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
    // Check if it's a rate limit issue
    const isRateLimit = result.error?.toLowerCase().includes('rate limit');

    if (isRateLimit) {
      console.log(chalk.yellow.bold('  ⏸ Session paused due to rate limit'));
      console.log();
      console.log(chalk.dim('  To resume later, run:'));
      console.log(chalk.cyan('  ralph-starter resume'));
```
🤖 Prompt for AI Agents
In `@src/commands/resume.ts` around lines 152 - 159, The current rate-limit
detection uses a case-sensitive check on result.error via isRateLimit =
result.error?.includes('Rate limit'); update this to a case-insensitive check
(for example, normalize the error string with toLowerCase() or use a
case-insensitive regex) so variations like "rate limit", "Rate Limit", or "RATE
LIMIT" are detected; update the isRateLimit assignment and any downstream logic
that depends on it (referencing result.error and isRateLimit in resume.ts) to
use the normalized comparison.
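The case-insensitive regex alternative mentioned in the prompt can be sketched as a small helper; isRateLimitError is a hypothetical name, not code from resume.ts:

```typescript
// The /i flag makes the match case-insensitive, so "Rate limit",
// "rate limit", and "RATE LIMIT" are all detected.
const isRateLimitError = (message?: string): boolean =>
  message ? /rate limit/i.test(message) : false;
```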
| case 'info': | ||
| await showSkillInfo(skillName); | ||
| break; |
🧹 Nitpick | 🔵 Trivial
Missing skillName validation before calling showSkillInfo.
While showSkillInfo handles the undefined case internally, this is inconsistent with other actions like add which validate at the switch level (lines 85-89). Consider either: (a) keeping it consistent by validating here, or (b) documenting that the function handles its own validation. Current approach works but is slightly asymmetric.
🤖 Prompt for AI Agents
In `@src/commands/skill.ts` around lines 106 - 108, The 'info' branch should
validate skillName like the other branches: before calling showSkillInfo, check
that skillName (the parsed argument used in the switch case 'info') is defined
and non-empty and if not, log the same error/usage message and exit/return as
done for the 'add' branch; otherwise call showSkillInfo(skillName). Update the
switch case handling for 'info' to mirror the validation logic used around 'add'
so behavior is consistent across commands.
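A minimal sketch of the validation the prompt describes, extracted into a helper; requireSkillName and the messages are hypothetical, not skill.ts's actual code:

```typescript
// Validate the skill name at the switch level, mirroring the 'add' branch,
// instead of relying on showSkillInfo's internal handling.
function requireSkillName(name: string | undefined): string {
  if (!name || name.trim() === '') {
    throw new Error('Usage: ralph-starter skill info <name>');
  }
  return name;
}
```

The 'info' case would then call `await showSkillInfo(requireSkillName(skillName))`, keeping the validation symmetric with 'add'.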
export function formatSessionSummary(session: SessionState): string {
  const lines: string[] = [];

  lines.push(`Session: ${session.id}`);
  lines.push(`Status: ${session.status}`);
  lines.push(`Task: ${session.task.slice(0, 60)}${session.task.length > 60 ? '...' : ''}`);
  lines.push(`Progress: ${session.iteration}/${session.maxIterations} iterations`);
  lines.push(`Agent: ${session.agent.name}`);

  if (session.commits.length > 0) {
    lines.push(`Commits: ${session.commits.length}`);
  }

  const duration = session.stats.totalDuration;
  if (duration > 0) {
    const minutes = Math.floor(duration / 60000);
    const seconds = Math.floor((duration % 60000) / 1000);
    lines.push(`Duration: ${minutes}m ${seconds}s`);
  }

  if (session.stats.costStats) {
    const cost = session.stats.costStats.totalCost.totalCost;
    lines.push(`Cost: $${cost.toFixed(3)}`);
  }

  if (session.pauseReason) {
    lines.push(`Pause reason: ${session.pauseReason}`);
  }

  if (session.error) {
    lines.push(`Error: ${session.error}`);
  }

  return lines.join('\n');
Sanitize user-provided strings before terminal output.
task, pauseReason, and error can carry control characters or ANSI sequences. This can cause terminal injection/garbled output. Strip control chars before printing.
🧹 Suggested sanitization
export function formatSessionSummary(session: SessionState): string {
const lines: string[] = [];
+ const sanitize = (value: string) => value.replace(/[\x00-\x1F\x7F]/g, '');
lines.push(`Session: ${session.id}`);
lines.push(`Status: ${session.status}`);
- lines.push(`Task: ${session.task.slice(0, 60)}${session.task.length > 60 ? '...' : ''}`);
+ const safeTask = sanitize(session.task);
+ lines.push(`Task: ${safeTask.slice(0, 60)}${safeTask.length > 60 ? '...' : ''}`);
lines.push(`Progress: ${session.iteration}/${session.maxIterations} iterations`);
lines.push(`Agent: ${session.agent.name}`);
@@
- if (session.pauseReason) {
- lines.push(`Pause reason: ${session.pauseReason}`);
+ if (session.pauseReason) {
+ lines.push(`Pause reason: ${sanitize(session.pauseReason)}`);
}
- if (session.error) {
- lines.push(`Error: ${session.error}`);
+ if (session.error) {
+ lines.push(`Error: ${sanitize(session.error)}`);
   }
🤖 Prompt for AI Agents
In `@src/loop/session.ts` around lines 270 - 303, The formatSessionSummary
function should sanitize user-provided strings (session.task,
session.pauseReason, session.error) before including them in terminal output to
prevent control characters/ANSI injection; add or call a small sanitizer (e.g.,
strip ANSI escape sequences and control chars via regex matching C0 controls
\x00-\x1F and \x7F and ANSI CSI sequences like \x1b\[[0-9;]*[A-Za-z]) and use
the sanitized value for slicing and all lines.push calls (update references in
formatSessionSummary where session.task is sliced and where session.pauseReason
and session.error are appended) so no raw user string is written to the
terminal.
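A sanitizer covering both concerns the prompt names (C0 control characters and ANSI CSI escape sequences) might look like this sketch; the exact regexes are illustrative, not the project's code:

```typescript
// Strip ANSI CSI sequences first (they start with ESC, a control character),
// then remove any remaining C0 controls and DEL.
function sanitize(value: string): string {
  return value
    .replace(/\x1b\[[0-9;]*[A-Za-z]/g, '') // ANSI CSI sequences, e.g. \x1b[31m
    .replace(/[\x00-\x1F\x7F]/g, '');      // remaining control characters
}
```

Stripping the CSI sequences before the control characters matters: removing the ESC byte first would leave stray `[31m` fragments behind.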
function getPackageVersion(): string {
  try {
    const __dirname = dirname(fileURLToPath(import.meta.url));
    const pkgPath = join(__dirname, '..', '..', 'package.json');
    const pkg = JSON.parse(readFileSync(pkgPath, 'utf-8'));
    return pkg.version || '0.1.0';
  } catch {
    return '0.1.0';
  }
}
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
# Find tsconfig.json and inspect its outDir/rootDir settings
cat tsconfig.json
Repository: rubenmarcus/ralph-starter
Length of output: 526
🏁 Script executed:
# List files in dist/ to see the actual compiled structure
find dist -type f -name "server.js" 2>/dev/null | head -20
Repository: rubenmarcus/ralph-starter
Length of output: 51
🏁 Script executed:
# Check if src/utils/version.ts exists and examine its implementation
cat -n src/utils/version.ts 2>/dev/null || echo "File not found"
Repository: rubenmarcus/ralph-starter
Length of output: 1546
🏁 Script executed:
# Verify the actual path resolution by checking project structure
git ls-files | grep -E "(tsconfig|package\.json)" | head -10
Repository: rubenmarcus/ralph-starter
Length of output: 132
🏁 Script executed:
# Check if src/mcp/server.ts imports getPackageVersion from utils
grep -n "getPackageVersion\|from.*utils/version\|import.*version" src/mcp/server.ts
Repository: rubenmarcus/ralph-starter
Length of output: 149
🏁 Script executed:
# Check where getPackageVersion is called in server.ts
grep -n "getPackageVersion()" src/mcp/server.ts
Repository: rubenmarcus/ralph-starter
Length of output: 149
Consolidate version resolution with existing utility.
The path ../../package.json is correct for the configured build layout (outDir=./dist, rootDir=./src), so the function will resolve properly. However, getPackageVersion() duplicates logic from src/utils/version.ts, which already handles the same concern with result caching and multi-candidate fallback. Import and reuse the shared utility instead to avoid drift and maintain consistency.
🤖 Prompt for AI Agents
In `@src/mcp/server.ts` around lines 18 - 27, The local getPackageVersion()
duplicates shared logic from src/utils/version.ts; remove this local function
and instead import and call the shared exporter (e.g., import {
getPackageVersion } from 'src/utils/version' or the actual exported name in that
module) wherever getPackageVersion() is used in this file (server.ts),
preserving the original call sites so the module uses the cached/multi-candidate
logic from the utility rather than the duplicate implementation.
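The shared utility's cached, multi-candidate shape might look roughly like this; a sketch only, where the candidate paths, cache variable, and fallback version are illustrative assumptions rather than the actual src/utils/version.ts:

```typescript
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Cache the result so repeated callers don't re-read package.json.
let cachedVersion: string | undefined;

function getPackageVersion(baseDir: string): string {
  if (cachedVersion) return cachedVersion;
  // Try several candidate locations (dev vs compiled dist layout).
  for (const rel of ['../package.json', '../../package.json']) {
    try {
      const pkg = JSON.parse(readFileSync(join(baseDir, rel), 'utf-8'));
      if (pkg.version) {
        cachedVersion = pkg.version;
        return pkg.version;
      }
    } catch {
      // candidate missing or unreadable; try the next one
    }
  }
  return '0.1.0'; // last-resort fallback
}
```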
export function renderProgressBar(
  current: number,
  total: number,
  options: { width?: number; label?: string } = {}
): string {
  const barWidth = options.width || 20;
  const ratio = Math.min(1, Math.max(0, current / total));
  const filled = Math.round(ratio * barWidth);
  const empty = barWidth - filled;
  const bar = `${'█'.repeat(filled)}${'░'.repeat(empty)}`;
  const info = options.label ? ` │ ${options.label}` : '';
  return `${chalk.cyan(bar)} ${current}/${total}${chalk.dim(info)}`;
}
Guard against total <= 0 to avoid divide-by-zero bars.
When total is 0, current/total becomes Infinity/NaN and renders a misleading full bar.
🛠️ Suggested fix
export function renderProgressBar(
current: number,
total: number,
options: { width?: number; label?: string } = {}
): string {
const barWidth = options.width || 20;
+ if (total <= 0) {
+ const bar = '░'.repeat(barWidth);
+ const info = options.label ? ` │ ${options.label}` : '';
+ return `${chalk.cyan(bar)} 0/0${chalk.dim(info)}`;
+ }
const ratio = Math.min(1, Math.max(0, current / total));
  const filled = Math.round(ratio * barWidth);
🤖 Prompt for AI Agents
In `@src/ui/box.ts` around lines 56 - 68, The renderProgressBar function can
divide by zero when total <= 0; update renderProgressBar to guard against
non-positive totals by computing ratio = 0 when total <= 0 (or otherwise use
Math.min(1, Math.max(0, current/total))) so you never compute current/total with
total <= 0; locate the function renderProgressBar in src/ui/box.ts and adjust
the ratio calculation to handle total <= 0 (also consider clamping current to
[0, total] before computing the ratio).
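As a standalone illustration of the guarded computation, with chalk and the label option omitted to keep it dependency-free; the glyphs and default width mirror the snippet above:

```typescript
// Guarded progress bar: never divides by a non-positive total,
// and clamps current into [0, total] before computing the ratio.
function renderBar(current: number, total: number, width = 20): string {
  if (total <= 0) return `${'░'.repeat(width)} 0/0`; // avoid NaN/Infinity ratios
  const clamped = Math.min(total, Math.max(0, current));
  const filled = Math.round((clamped / total) * width);
  return `${'█'.repeat(filled)}${'░'.repeat(width - filled)} ${clamped}/${total}`;
}
```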
// Progress bar line (if iteration info is available)
if (this.maxIterations > 0) {
  const barWidth = 16;
  const ratio = Math.min(1, this.currentIteration / this.maxIterations);
  const filled = Math.round(ratio * barWidth);
  const empty = barWidth - filled;
  const bar = `${'█'.repeat(filled)}${'░'.repeat(empty)}`;
  const costStr = this.currentCost > 0 ? ` │ $${this.currentCost.toFixed(2)}` : '';
  line += `\n ${chalk.cyan(bar)} ${chalk.dim(`${this.currentIteration}/${this.maxIterations}${costStr}`)}`;
}

// Only update if changed (reduces flicker)
if (line !== this.lastRender) {
-   process.stdout.write(`\r\x1B[K${line}`);
+   // Clear current line(s) and write
+   const lineCount = this.maxIterations > 0 ? 2 : 1;
+   const clearUp = lineCount > 1 ? `\x1B[${lineCount - 1}A\r\x1B[J` : '\r\x1B[K';
+   // On first render, don't try to go up
+   const clear = this.lastRender ? clearUp : '\r\x1B[K';
+   process.stdout.write(`${clear}${line}`);
Clear logic leaves a stale bar if maxIterations drops to 0.
If the bar was previously rendered and later hidden, only one line is cleared, leaving the old bar line visible.
🛠️ Suggested fix
- const lineCount = this.maxIterations > 0 ? 2 : 1;
- const clearUp = lineCount > 1 ? `\x1B[${lineCount - 1}A\r\x1B[J` : '\r\x1B[K';
+ const hadBar = this.lastRender.includes('\n');
+ const lineCount = this.maxIterations > 0 || hadBar ? 2 : 1;
+ const clearUp = lineCount > 1 ? `\x1B[${lineCount - 1}A\r\x1B[J` : '\r\x1B[K';🤖 Prompt for AI Agents
In `@src/ui/progress-renderer.ts` around lines 109 - 127, The clear logic assumes
the previous render had the same number of lines as the current render, which
leaves a stale bar when this.maxIterations goes from >0 to 0; update the
clearing logic in the ProgressRenderer (variables: this.lastRender, lineCount,
clearUp, clear, process.stdout.write) to compute the previous render's line
count (e.g., prevLineCount = this.lastRender ?
this.lastRender.split('\n').length : 0) and use prevLineCount when building
clearUp so you always clear all previously rendered lines (use prevLineCount for
the up/clear sequence when this.lastRender exists, and fall back to '\r\x1B[K'
on first render) before writing the new line.
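The prevLineCount approach described above can be sketched as a small pure function; buildClearSequence is a hypothetical helper, not the actual ProgressRenderer method:

```typescript
// Compute the escape sequence needed to erase the PREVIOUS render,
// based on how many lines it actually occupied, so a stale bar line
// is cleared even when the new render has fewer lines.
function buildClearSequence(lastRender: string | undefined): string {
  if (!lastRender) return '\r\x1B[K'; // first render: just clear current line
  const prevLineCount = lastRender.split('\n').length;
  // Move up past every previously rendered line, then clear to end of screen.
  return prevLineCount > 1 ? `\x1B[${prevLineCount - 1}A\r\x1B[J` : '\r\x1B[K';
}
```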
- Update pnpm-lock.yaml to include @anthropic-ai/sdk dependency - Remove duplicate rate-limits.md doc (rate-limiting.md already exists) - Use 'release' label instead of 'stable-release' in workflow Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
| Metric | Value |
|---|---|
| Base | 1542.88 KB |
| PR | 1674.49 KB |
| Diff | 131.61 KB (8.00%) |
Bundle breakdown
156K dist/auth
28K dist/automation
4.0K dist/cli.d.ts
4.0K dist/cli.d.ts.map
16K dist/cli.js
12K dist/cli.js.map
412K dist/commands
28K dist/config
4.0K dist/index.d.ts
4.0K dist/index.d.ts.map
4.0K dist/index.js
4.0K dist/index.js.map
536K dist/integrations
84K dist/llm
648K dist/loop
172K dist/mcp
32K dist/presets
92K dist/setup
392K dist/sources
76K dist/ui
44K dist/utils
304K dist/wizard
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/prepare-release.yml (2)
52-54: 🧹 Nitpick | 🔵 Trivial
Hardcoded sleep for API consistency is fragile but common.
The sleep 5 delay for GitHub API eventual consistency is a pragmatic workaround. While this usually works, it's worth noting that under heavy load, 5 seconds may not always be sufficient. Consider adding a brief comment explaining this is for API consistency, or implementing a retry loop if you encounter intermittent "PR not found" issues in the future.
258-279: ⚠️ Potential issue | 🟡 Minor
Add error handling for git operations.
The git checkout and reset operations (lines 262-263) don't have explicit error handling. If git checkout "$RELEASE_BRANCH" fails (e.g., branch was force-deleted externally), the subsequent git reset --hard would fail with a confusing error.
🛡️ Proposed fix: Add error handling
if [ "$EXISTING_VERSION" = "$NEW_VERSION" ]; then
  # Same version — update existing branch by resetting to main
  echo "Updating existing release branch $RELEASE_BRANCH"
- git checkout "$RELEASE_BRANCH"
- git reset --hard origin/main
+ if ! git checkout "$RELEASE_BRANCH" 2>/dev/null; then
+   echo "Branch $RELEASE_BRANCH not found locally, fetching..."
+   git fetch origin "$RELEASE_BRANCH"
+   git checkout "$RELEASE_BRANCH"
+ fi
+ git reset --hard origin/main || { echo "Failed to reset branch"; exit 1; }
🤖 Fix all issues with AI agents
In @.github/workflows/prepare-release.yml:
- Around line 21-32: Add a concurrency group to the "create-release-pr" workflow
job to prevent parallel runs from racing when multiple candidate-release/release
PRs merge; specifically, add a top-level concurrency stanza for the job
(referencing the job name create-release-pr) with a unique group key that
includes the repository and branch/PR context (e.g. using github.repository and
github.ref or github.event.pull_request.head.ref) and set cancel-in-progress:
false so queued runs run sequentially instead of being canceled.
# Create/update release PR when a PR with candidate-release or release label is merged
create-release-pr:
  name: Create Release PR
  if: |
    github.event.action == 'closed' &&
    github.event.pull_request.merged == true &&
    contains(github.event.pull_request.labels.*.name, 'candidate-release') &&
    (
      contains(github.event.pull_request.labels.*.name, 'candidate-release') ||
      contains(github.event.pull_request.labels.*.name, 'release')
    ) &&
    !startsWith(github.event.pull_request.head.ref, 'release/')
  runs-on: ubuntu-latest
Consider adding concurrency control to prevent race conditions.
If two candidate-release or release PRs merge in quick succession, parallel workflow runs could race to create or update the release branch/PR, potentially causing conflicts or duplicate PRs.
🛡️ Proposed fix: Add concurrency group
create-release-pr:
name: Create Release PR
+ concurrency:
+ group: release-pr-creation
+ cancel-in-progress: false
if: |
  github.event.action == 'closed' &&
Using cancel-in-progress: false ensures queued runs complete sequentially rather than being cancelled, so no release changes are lost.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Create/update release PR when a PR with candidate-release or release label is merged
create-release-pr:
  name: Create Release PR
  concurrency:
    group: release-pr-creation
    cancel-in-progress: false
  if: |
    github.event.action == 'closed' &&
    github.event.pull_request.merged == true &&
    (
      contains(github.event.pull_request.labels.*.name, 'candidate-release') ||
      contains(github.event.pull_request.labels.*.name, 'release')
    ) &&
    !startsWith(github.event.pull_request.head.ref, 'release/')
  runs-on: ubuntu-latest
🤖 Prompt for AI Agents
In @.github/workflows/prepare-release.yml around lines 21 - 32, Add a
concurrency group to the "create-release-pr" workflow job to prevent parallel
runs from racing when multiple candidate-release/release PRs merge;
specifically, add a top-level concurrency stanza for the job (referencing the
job name create-release-pr) with a unique group key that includes the repository
and branch/PR context (e.g. using github.repository and github.ref or
github.event.pull_request.head.ref) and set cancel-in-progress: false so queued
runs run sequentially instead of being canceled.
- Loop HTML tag removal to prevent incomplete sanitization bypass - Replace TOCTOU existsSync+readFileSync with try/catch - Remove unused fullTask variable in context-builder Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove statSync+readFileSync TOCTOU pattern entirely. Instead of checking file type then reading, try reading directly and catch errors. This eliminates the race window between stat and read. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/loop/executor.ts`:
- Around line 848-852: The loop currently appends compressed validation feedback
to taskWithSkills (via compressValidationFeedback) causing unbounded
accumulation across iterations; instead, avoid mutating taskWithSkills between
iterations—either rebuild taskWithSkills from the original base/context each
loop before adding fresh iteration info, or (preferred) stop appending and pass
the latest compressed feedback into buildIterationContext via its
validationFeedback parameter (use compressValidationFeedback(feedback) and
supply that to buildIterationContext) so feedback is handled per-iteration and
does not accumulate.
- // Continue loop with validation feedback
- taskWithSkills = `${taskWithSkills}\n\n${feedback}`;
+ // Continue loop with compressed validation feedback
+ const compressedFeedback = compressValidationFeedback(feedback);
+ taskWithSkills = `${taskWithSkills}\n\n${compressedFeedback}`;
  continue; // Go to next iteration to fix issues
Validation feedback accumulates across iterations.
Line 851 appends compressed validation feedback to taskWithSkills, which persists across loop iterations. If multiple consecutive iterations fail validation, feedback accumulates unboundedly:
- Iteration 2 fails → append feedback
- Iteration 3 fails → append more feedback
- ...and so on
This can cause growing context size despite compression, potentially exceeding token budgets or degrading agent performance with redundant information.
🛡️ Proposed fix: Reset taskWithSkills or use buildIterationContext's validationFeedback param
Option A: Rebuild taskWithSkills each iteration (prevents accumulation):
+ // Rebuild task with skills and fresh validation feedback
+ let iterationTaskWithSkills = taskWithSkills;
// Continue loop with compressed validation feedback
const compressedFeedback = compressValidationFeedback(feedback);
- taskWithSkills = `${taskWithSkills}\n\n${compressedFeedback}`;
+ iterationTaskWithSkills = `${taskWithSkills}\n\n${compressedFeedback}`;
Option B: Pass latest feedback through buildIterationContext's validationFeedback param instead of mutating taskWithSkills (leverages the builder's compression logic).
🤖 Prompt for AI Agents
In `@src/loop/executor.ts` around lines 848 - 852, The loop currently appends
compressed validation feedback to taskWithSkills (via
compressValidationFeedback) causing unbounded accumulation across iterations;
instead, avoid mutating taskWithSkills between iterations—either rebuild
taskWithSkills from the original base/context each loop before adding fresh
iteration info, or (preferred) stop appending and pass the latest compressed
feedback into buildIterationContext via its validationFeedback parameter (use
compressValidationFeedback(feedback) and supply that to buildIterationContext)
so feedback is handled per-iteration and does not accumulate.
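Option B can be sketched as a pure helper so the accumulation bug cannot recur; nextIterationPrompt is a hypothetical name, and the real buildIterationContext signature may differ:

```typescript
// Rebuild the prompt from the unchanged base each iteration, attaching only
// the LATEST compressed feedback, so feedback never accumulates across loops.
function nextIterationPrompt(
  taskWithSkills: string,
  feedback: string | undefined,
  compress: (s: string) => string
): string {
  return feedback ? `${taskWithSkills}\n\n${compress(feedback)}` : taskWithSkills;
}
```

Because `taskWithSkills` is never mutated, two consecutive failing iterations each produce a prompt containing exactly one feedback block.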
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/loop/skills.ts (1)
139-184: ⚠️ Potential issue | 🟡 Minor
Duplicate skill names possible across multiple skills.sh files.
The deduplication at line 170 only checks against skills added from the current skills.sh file's parsing session. If the same skill name appears via a # Skill: comment in one skills.sh and via npx add-skill in another, both will be added.
Consider deduplicating against the entire skills array (including skills from directory sources) if uniqueness across all sources is expected:
- if (!skills.some((s) => s.name === name)) {
+ if (!skills.some((s) => s.name.toLowerCase() === name.toLowerCase())) {
Also note: the # Skill: regex should probably trim the captured name to avoid trailing whitespace issues.
🛡️ Optional fix to trim captured skill name
- const name = match.replace(/# Skill: /i, '').trim();
+ const name = match.replace(/^# Skill:\s*/i, '').trim();
🤖 Fix all issues with AI agents
In `@src/loop/skills.ts`:
- Around line 273-276: findSkill currently rescans with detectClaudeSkills(cwd)
on every call causing redundant I/O; change findSkill to accept either a
pre-fetched ClaudeSkill[] or a cwd string (e.g., add overloads for
findSkill(skills: ClaudeSkill[], name: string) and findSkill(cwd: string, name:
string)) and implement a single body that uses the provided skills array when
given, falling back to detectClaudeSkills(cwd) only if a string cwd is passed;
reference the detectClaudeSkills and findSkill symbols when updating signatures
so callers can pass a cached array to avoid repeated rescans.
- Around line 16-30: The current parseFrontmatter function uses regex to extract
single-line name/description and misses YAML block scalars (multi-line values);
update parseFrontmatter to extract the --- delimited block, parse it with a real
YAML parser (e.g., yaml.parse) and return name and description from the parsed
object (preserving multi-line strings), or if you prefer not to add a
dependency, document the limitation in parseFrontmatter's JSDoc and explicitly
strip/block scalar handling; locate the function parseFrontmatter to implement
the change and ensure it still returns { name?: string; description?: string }
when fields are absent.
function parseFrontmatter(content: string): { name?: string; description?: string } {
  const match = content.match(/^---\s*\n([\s\S]*?)\n---/);
  if (!match) return {};

  const yaml = match[1];
  const result: { name?: string; description?: string } = {};

  const nameMatch = yaml.match(/^name:\s*(.+)$/m);
  if (nameMatch) result.name = nameMatch[1].trim().replace(/^['"]|['"]$/g, '');

  const descMatch = yaml.match(/^description:\s*(.+)$/m);
  if (descMatch) result.description = descMatch[1].trim().replace(/^['"]|['"]$/g, '');

  return result;
}
🧹 Nitpick | 🔵 Trivial
Simplified YAML parser handles common cases but misses multi-line values.
The regex-based frontmatter parser works well for single-line name: and description: fields. However, YAML block scalars (multi-line strings using | or >) won't be captured:
---
name: my-skill
description: |
This is a multi-line
description that won't parse
---
For a CLI tool's skill metadata, this is likely fine since complex descriptions are rare. If you anticipate richer frontmatter, consider a lightweight YAML parser like yaml or document this limitation.
🤖 Prompt for AI Agents
In `@src/loop/skills.ts` around lines 16 - 30, The current parseFrontmatter
function uses regex to extract single-line name/description and misses YAML
block scalars (multi-line values); update parseFrontmatter to extract the ---
delimited block, parse it with a real YAML parser (e.g., yaml.parse) and return
name and description from the parsed object (preserving multi-line strings), or
if you prefer not to add a dependency, document the limitation in
parseFrontmatter's JSDoc and explicitly strip/block scalar handling; locate the
function parseFrontmatter to implement the change and ensure it still returns {
name?: string; description?: string } when fields are absent.
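If adding a dependency is undesirable, a dependency-free sketch that additionally handles `|` and `>` block scalars might look like this; it is illustrative only, and a real YAML parser such as the yaml package remains the sturdier fix:

```typescript
// Extract name/description from --- frontmatter, supporting plain values
// and YAML block scalars (| keeps newlines, > folds them into spaces).
function parseFrontmatter(content: string): { name?: string; description?: string } {
  const match = content.match(/^---\s*\n([\s\S]*?)\n---/);
  if (!match) return {};
  const lines = match[1].split('\n');
  const result: { name?: string; description?: string } = {};
  for (let i = 0; i < lines.length; i++) {
    const kv = lines[i].match(/^(name|description):\s*(.*)$/);
    if (!kv) continue;
    const key = kv[1] as 'name' | 'description';
    if (kv[2] === '|' || kv[2] === '>') {
      // Collect the indented lines that make up the block scalar.
      const block: string[] = [];
      while (i + 1 < lines.length && /^\s+\S/.test(lines[i + 1])) {
        block.push(lines[++i].trim());
      }
      result[key] = block.join(kv[2] === '|' ? '\n' : ' ');
    } else {
      result[key] = kv[2].trim().replace(/^['"]|['"]$/g, '');
    }
  }
  return result;
}
```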
export function findSkill(cwd: string, name: string): ClaudeSkill | undefined {
  const skills = detectClaudeSkills(cwd);
  return skills.find((s) => s.name.toLowerCase() === name.toLowerCase());
}
🧹 Nitpick | 🔵 Trivial
Consider caching if findSkill is called repeatedly.
Each call to findSkill triggers a full detectClaudeSkills rescan of all directories. For single CLI invocations this is fine, but if this gets called in a loop (e.g., validating multiple skill references), the repeated I/O could add latency.
If the caller controls the context, consider passing a pre-fetched skills array:
export function findSkill(cwd: string, name: string): ClaudeSkill | undefined;
export function findSkill(skills: ClaudeSkill[], name: string): ClaudeSkill | undefined;
export function findSkill(cwdOrSkills: string | ClaudeSkill[], name: string): ClaudeSkill | undefined {
const skills = typeof cwdOrSkills === 'string'
? detectClaudeSkills(cwdOrSkills)
: cwdOrSkills;
return skills.find((s) => s.name.toLowerCase() === name.toLowerCase());
}🤖 Prompt for AI Agents
In `@src/loop/skills.ts` around lines 273 - 276, findSkill currently rescans with
detectClaudeSkills(cwd) on every call causing redundant I/O; change findSkill to
accept either a pre-fetched ClaudeSkill[] or a cwd string (e.g., add overloads
for findSkill(skills: ClaudeSkill[], name: string) and findSkill(cwd: string,
name: string)) and implement a single body that uses the provided skills array
when given, falling back to detectClaudeSkills(cwd) only if a string cwd is
passed; reference the detectClaudeSkills and findSkill symbols when updating
signatures so callers can pass a cached array to avoid repeated rescans.
The --force-with-lease push fails when the automation branch already exists remotely because the local checkout has no knowledge of the remote ref. Adding a fetch first resolves the stale info rejection. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This reverts commit 73dfbcc.
Replaced by new PR from renamed branch staging/v0.2.0
Summary
Consolidates 9 PRs into a single release branch for the upcoming stable release. All features have been tested together — build passes, 143/143 tests pass.
What's Included
New Features
- --batch flag for ralph-starter auto submits tasks via Batch API for 50% cost reduction. Includes polling with exponential backoff
- ralph-starter pause and ralph-starter resume commands for graceful rate limit recovery
- MCP: new tools (ralph_list_presets, ralph_fetch_spec), prompts (figma_to_code, batch_issues), activity log as MCP resource, version synced with package.json
- .agents/skills directory support, and skill info command

Bug Fixes
- taskCustomId helper, pricing caveat for non-Sonnet models

PRs Merged

Test Plan
- pnpm build passes
- pnpm test:run — 143/143 tests pass
- ralph-starter --help, ralph-starter presets
- ralph-starter mcp
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements