Version: 1.1 Last Updated: 2026-01-26 Category: Reference
New to Babysitter? These are the most common questions from first-time users:
| Question | Quick Answer |
|---|---|
| What does Babysitter actually do? | It automates the "try → test → fix → repeat" loop until your code meets quality targets |
| How do I start? | Just type /babysitter:call build a login page in Claude Code |
| Do I need to write code? | No - you use natural language. Babysitter handles the rest |
| What if something goes wrong? | Everything is saved automatically. You can always resume or debug |
| Is it free? | Babysitter is included with Claude Code - no additional cost |
Still have questions? Browse the full FAQ below, or check the Troubleshooting guide.
- Getting Started
- Installation and Setup
- Using Babysitter
- Breakpoints and Approval
- Quality Convergence
- Sessions and Resumption
- Process Definitions
- Performance and Optimization
- Security and Compliance
- Troubleshooting
Babysitter is an event-sourced orchestration framework for Claude Code that enables deterministic, resumable, and human-in-the-loop workflow management. It allows you to build complex, multi-step development processes with built-in quality gates, human approval checkpoints, and automatic iteration until quality targets are met.
Key features:
- Structured multi-step workflows
- Human approval checkpoints (breakpoints)
- Iterative quality convergence
- Complete audit trails
- Session persistence and resumability
See: README
| Feature | Regular Claude Code | With Babysitter |
|---|---|---|
| Session persistence | Lost on restart | Event-sourced, resumable |
| Quality iteration | Manual prompting | Automated convergence |
| Approval gates | Chat-based | Structured breakpoints |
| Parallel execution | Sequential only | Built-in parallelism |
| Audit trail | Chat history | Structured journal |
Babysitter adds orchestration capabilities, enabling deterministic workflows with full traceability.
No. You interact with Babysitter using natural language. Simply ask Claude to use the babysitter skill:
Use the babysitter skill to implement user authentication with TDD
Or use the slash command:
/babysitter:call implement user authentication with TDD
However, creating custom process definitions does require JavaScript/TypeScript knowledge.
See: Getting Started
Babysitter is designed specifically for Claude Code. The orchestration framework integrates with Claude Code's plugin system and skill infrastructure. While the underlying concepts could be adapted, Babysitter is not currently compatible with other AI coding assistants.
Babysitter excels at:
- Feature development with TDD and quality gates
- Code refactoring with iterative improvement
- Multi-phase workflows requiring human approval
- Complex tasks spanning multiple files or components
- Team workflows requiring audit trails and approvals
For simple, one-off tasks, using Claude Code directly may be faster.
Required:
- Node.js 20.0.0+ (recommend 22.x LTS)
- Claude Code (latest version)
- npm 8.0.0+
Optional:
- Git (for version control)
- jq (for CLI output parsing)
See: Installation Guide
Babysitter has two packages with distinct responsibilities:
- @a5c-ai/babysitter - Core package
- @a5c-ai/babysitter-sdk - Orchestration runtime, CLI, and integrated breakpoints UI
Install both:
npm install -g @a5c-ai/babysitter-sdk@latest
The .a5c/runs/ directory stores all run data:
- Light usage: 1-5 MB per run
- Heavy usage: 50-100 MB per run (with large artifacts)
Monitor disk usage:
du -sh .a5c/
du -h .a5c/runs/* | sort -h
You can safely delete old runs to reclaim space:
rm -rf .a5c/runs/<old-run-id>
Update SDK packages:
npm update -g @a5c-ai/babysitter @a5c-ai/babysitter-sdk
Update Claude Code plugin:
claude plugin marketplace update a5c.ai
claude plugin update babysitter@a5c.ai
Tip: Update regularly (daily or weekly) for the latest features and fixes.
Common causes and solutions:
- Plugin not installed:
  claude plugin marketplace add a5c-ai/babysitter
  claude plugin install --scope user babysitter@a5c.ai
- Plugin not enabled:
  claude plugin enable --scope user babysitter@a5c.ai
- Claude Code not restarted:
  - Close all Claude Code windows
  - Reopen Claude Code
- Verify installation:
  claude plugin list | grep babysitter
See: Installation Troubleshooting
Via natural language:
Use the babysitter skill to implement user authentication
Via slash command:
/babysitter:call implement user authentication with TDD
With options:
/babysitter:call implement user authentication --max-iterations 10
See: Quickstart
Simply close Claude Code. The run is automatically saved to the event-sourced journal and can be resumed later.
Babysitter is designed to be resumable at any point.
Via natural language:
Resume the babysitter run for the authentication feature
Via slash command with run ID:
/babysitter:call resume --run-id 01KFFTSF8TK8C9GT3YM9QYQ6WG
Find your run ID:
ls -lt .a5c/runs/ | head -10
See: Run Resumption
The run ID is displayed when you start a workflow:
Run ID: 01KFFTSF8TK8C9GT3YM9QYQ6WG
Run Directory: .a5c/runs/01KFFTSF8TK8C9GT3YM9QYQ6WG/
Ask Claude to find recent runs:
What babysitter runs have I done recently?
Check run status:
What's the status of my babysitter run?
Not recommended. Running multiple babysitter instances in the same directory may cause journal conflicts.
For parallel work:
- Use separate directories
- Use separate runs for independent features
- Wait for one run to complete before starting another in the same directory
When a task fails, Babysitter:
- Records the failure in the journal
- May retry based on configuration
- Reports the error for debugging
To investigate:
babysitter run:events <runId> --filter-type RUN_FAILED --json
To resume after fixing:
/babysitter:call resume
- Check the journal:
  cat .a5c/runs/<runId>/journal/journal.jsonl | jq .
- View recent events:
  babysitter run:events <runId> --limit 10 --reverse --json
- Find the error:
  babysitter run:events <runId> --filter-type RUN_FAILED --json
- Ask Claude to analyze:
  Analyze the babysitter run error for <runId> and diagnose
Quality scores are assessments of code quality generated by Babysitter's agent tasks. Scores are based on:
- Test coverage
- Test quality
- Code quality metrics (lint, types)
- Security analysis
- Requirements alignment
Scores range from 0-100.
See: Quality Convergence
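As a rough illustration of how criteria like these might combine into a single 0-100 score, here is a hypothetical weighted average. The weights and the function name are illustrative assumptions, not Babysitter's actual formula:

```javascript
// Hypothetical composite quality score (weights are illustrative only).
function computeQualityScore({ coverage, testQuality, codeQuality, security, alignment }) {
  const weights = { coverage: 0.25, testQuality: 0.2, codeQuality: 0.2, security: 0.2, alignment: 0.15 };
  const raw =
    coverage * weights.coverage +
    testQuality * weights.testQuality +
    codeQuality * weights.codeQuality +
    security * weights.security +
    alignment * weights.alignment;
  // Clamp to the documented 0-100 range.
  return Math.min(100, Math.max(0, Math.round(raw)));
}
```

Each input is itself a 0-100 measurement, so the weighted sum stays in range; clamping just guards against out-of-range inputs.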
Include it in your prompt:
Use babysitter with TDD and 85% quality target
Or specify in process inputs:
const { targetQuality = 85, maxIterations = 5 } = inputs;
Common causes:
- Target is unrealistic
- Fundamental issues blocking improvement
- Scoring criteria too strict
Solutions:
- Review iteration feedback:
  What recommendations came from my quality scoring?
- Lower the target:
  /babysitter:call continue with 75% quality target
- Increase iterations:
  /babysitter:call continue with max 10 iterations
- Review blocking issues: check lint errors, test failures, etc.
Yes. Create custom agent tasks with your scoring criteria:
export const customScoringTask = defineTask('custom-scorer', (args, taskCtx) => ({
kind: 'agent',
title: 'Custom quality scoring',
agent: {
name: 'quality-assessor',
prompt: {
role: 'quality engineer',
task: 'Score based on our team standards',
instructions: [
'Your custom criteria here',
'...'
]
}
}
}));
| Workflow Type | Typical Iterations | Maximum Recommended |
|---|---|---|
| Simple improvement | 2-3 | 5 |
| Feature development | 3-5 | 10 |
| Complex refactoring | 5-8 | 15 |
Always set iteration limits to prevent runaway loops.
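The iteration cap above amounts to a simple guard in the convergence loop. This is a minimal sketch, where `scoreFn` and `improveFn` are hypothetical stand-ins for Babysitter's scoring and improvement tasks:

```javascript
// Minimal converge-until-target loop with an iteration cap.
// scoreFn(): Promise<number 0-100>; improveFn(lastScore): Promise<void>.
async function converge(scoreFn, improveFn, { targetQuality = 85, maxIterations = 5 } = {}) {
  let score = await scoreFn();
  let iterations = 0;
  while (score < targetQuality && iterations < maxIterations) {
    await improveFn(score); // apply fixes based on the last assessment
    score = await scoreFn(); // re-assess
    iterations++;
  }
  return { score, iterations, converged: score >= targetQuality };
}
```

The loop exits either on reaching the target or on hitting the cap, which is what prevents a runaway workflow when the target is unreachable.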
Babysitter uses event sourcing:
- Every action is recorded in the journal
- On resume, events are replayed to rebuild state
- Completed tasks return cached results
- Execution continues from the last pending point
This makes sessions fully resumable regardless of why they ended.
See: Run Resumption
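The replay step can be illustrated with a minimal sketch. The event names and shapes here are simplified assumptions, not the SDK's real journal schema:

```javascript
// Illustrative event-sourced replay: scanning the journal rebuilds a cache of
// completed task results, so finished work is not redone on resume.
function replay(journal) {
  const completed = new Map(); // taskId -> cached result
  const pending = new Set();   // tasks started but not finished
  for (const event of journal) {
    if (event.type === 'TASK_STARTED') pending.add(event.taskId);
    if (event.type === 'TASK_COMPLETED') {
      pending.delete(event.taskId);
      completed.set(event.taskId, event.result);
    }
  }
  return { completed, pending };
}
```

On resume, tasks found in `completed` return their cached results, and execution continues from whatever remains in `pending`.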
No. All progress is preserved in the journal. Resume with:
/babysitter:call resume --run-id <runId>
Yes. Runs are stored in the file system and can be continued by anyone with access:
# Developer A starts
/babysitter:call implement feature X
# Developer B continues
/babysitter:call resume the feature X workflow
Ensure you share the .a5c/ directory (e.g., via Git or shared storage).
Pending breakpoints are preserved. On resume:
- Babysitter detects the pending breakpoint
- Checks if it has been approved
- If approved, continues; if not, waits
Approve breakpoints before resuming, or resume and check the breakpoints UI.
A process definition is a JavaScript function that orchestrates workflow logic. It defines:
- What tasks to run
- In what order
- With what conditions
- Where to place breakpoints
export async function process(inputs, ctx) {
const plan = await ctx.task(planTask, { feature: inputs.feature });
await ctx.breakpoint({ question: 'Approve plan?' });
const result = await ctx.task(implementTask, { plan });
return result;
}
See: Process Definitions
Not recommended. Process definitions are associated with runs at creation time. Modifying them during execution may cause unexpected behavior.
For changes, start a new run with the updated process.
| Type | Use Case | Example |
|---|---|---|
| Agent | LLM-powered tasks | Planning, scoring |
| Skill | Claude Code skills | Code analysis |
| Node | JavaScript scripts | Build, test |
| Shell | Commands | git, npm |
| Breakpoint | Human approval | Review gates |
Yes, using ctx.parallel.all():
const [coverage, lint, security] = await ctx.parallel.all([
() => ctx.task(coverageTask, {}),
() => ctx.task(lintTask, {}),
() => ctx.task(securityTask, {})
]);
This significantly speeds up workflows with independent tasks.
See: Parallel Execution
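Conceptually, `ctx.parallel.all()` takes an array of thunks and resolves them together, much like `Promise.all`. This is a simplified sketch of that idea only; the real SDK also journals each branch for resumability:

```javascript
// Conceptual model of ctx.parallel.all: invoke each thunk and await all results.
async function parallelAll(thunks) {
  return Promise.all(thunks.map((fn) => fn()));
}
```

Passing thunks (`() => ctx.task(...)`) rather than already-started promises lets the orchestrator control when each branch actually begins.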
| Workflow Type | Expected Duration |
|---|---|
| Simple build & test | 30s - 2m |
| TDD feature | 3m - 10m |
| Complex refactoring | 10m - 30m |
| Full application | 30m - 2h |
Duration depends on iteration count, task complexity, and API latency.
- Use parallel execution:
  await ctx.parallel.all([task1, task2, task3]);
- Set iteration limits:
  Use babysitter with max 3 iterations
- Reduce agent task scope:
  await ctx.task(analyzeTask, { files: ['specific/file.js'] });
- Lower the quality target for faster convergence
- LLM API latency - 2-5 seconds per agent call
- Iteration count - More iterations = longer runtime
- Task complexity - Large codebases take longer
- Parallel vs sequential - Parallel can be 4x faster
Ask Claude for updates:
What's the current progress of my babysitter run?
Show me the recent events in my workflow
How many iterations have completed?
Agent tasks use Claude's API, which means:
- Code context is sent to the API for analysis
- No data is stored by the API beyond the session
- Review Anthropic's privacy policy for details
For sensitive code, consider:
- Using shell/node tasks instead of agent tasks
- Running analysis locally
- Reviewing what context is sent to agents
Best practices:
- Use environment variables:
  const apiKey = process.env.API_KEY;
- Never hardcode credentials
- Add .a5c/ to .gitignore
- Review journal before sharing:
  grep -i "password\|secret\|key" .a5c/runs/*/journal/*.json
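To help with that last review step, a small redaction pass can mask likely secrets before a journal is shared. This is a hypothetical helper; the patterns are illustrative and do not replace a manual review:

```javascript
// Mask values that follow common secret-like keys (password, secret, apiKey, token).
// Patterns are illustrative; always review the output by hand as well.
function redactSecrets(text) {
  return text.replace(
    /(password|secret|api[_-]?key|token)(["']?\s*[:=]\s*["']?)[^\s"',}]+/gi,
    '$1$2[REDACTED]'
  );
}
```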
Yes. The journal system records:
- Every task execution
- Every breakpoint approval/rejection
- Every state change
- Complete timestamps
Export audit trail:
jq '.' .a5c/runs/<runId>/journal/*.json > audit-report.json
See: Journal System
Generally, no. Add to .gitignore:
.a5c/
Reasons:
- May contain sensitive data
- Can grow large
- State cache is derived, not source
However, you may choose to commit journals for audit purposes if they don't contain sensitive information.
Ask Claude to show you the relevant information:
- Journal events:
  Show me the events in my babysitter run
- Task outputs:
  What was the result of the last task in my workflow?
- Run state:
  What's the current state of my babysitter run?
Likely cause: A breakpoint is pending approval.
Solution:
- Check breakpoints UI: http://localhost:3184
- Review and approve/reject the breakpoint
- Resume if needed
Verify:
Are there any pending breakpoints in my babysitter run?
The breakpoints UI is integrated into the SDK and starts automatically when a workflow reaches a breakpoint.
Check if accessible:
curl http://localhost:3184/health
If not accessible:
- Ensure a workflow with breakpoints is running
- The UI starts automatically when a breakpoint is reached
- Check if another process is using port 3184:
lsof -i :3184
If port is in use: Kill the conflicting process or configure a different port in your SDK settings.
- Gather information:
  - OS and version
  - Node.js version
  - Claude Code version
  - Babysitter SDK version
  - Full error message
  - Relevant journal excerpts
- Search existing issues: GitHub Issues
- Create a new issue: include all gathered information and steps to reproduce.
- Troubleshooting Guide - Detailed problem-solution reference
- Error Catalog - Common error messages explained
- GitHub Issues - Report bugs
- GitHub Discussions - Ask questions
Document Status: Complete Last Updated: 2026-01-25