**name:** ring:dev-cycle

**description:** Main orchestrator for the 11-gate development cycle system (Gates 0–9 plus Gate 0.5 Delivery Verification). Loads tasks/subtasks from PM team output and executes through implementation → devops → SRE → unit testing → fuzz testing → property testing → integration testing (write) → chaos testing (write) → review → validation gates, with state persistence and metrics collection. Gates 6-7 (integration/chaos) write and update test code per unit but only execute tests at end of cycle (deferred execution). Multi-tenant dual-mode is implemented during Gate 0 and verified at Gate 0.5G (no separate post-cycle step).

**trigger:**
- Starting a new development cycle with a task file
- Resuming an interrupted development cycle (--resume flag)
- Need structured, gate-based task execution with quality checkpoints

**prerequisite:**
- Tasks file exists with structured subtasks
- Not already in a specific gate skill execution
- Human has not explicitly requested manual workflow

**NOT_skip_when:**
- "Task is simple" → Simple ≠ risk-free. Execute gates.
- "Tests already pass" → Tests ≠ review. Different concerns.
- "Time pressure" → Pressure ≠ permission. Document and proceed.
- "Already did N gates" → Sunk cost is irrelevant. Complete all gates.
Before any gate execution, you MUST load Ring standards:
<fetch_required> https://raw.githubusercontent.com/LerianStudio/ring/main/CLAUDE.md </fetch_required>
Fetch URL above and extract: Agent Modification Verification requirements, Anti-Rationalization Tables requirements, and Critical Rules.
<block_condition>
- WebFetch fails or returns empty
- CLAUDE.md not accessible </block_condition>
If any condition is true, STOP and report blocker. Cannot proceed without Ring standards.
The development cycle orchestrator loads tasks/subtasks from PM team output (or manual task files) and executes through 11 gates (Gate 0–9, including 0.5 Delivery Verification) with deferred execution for infrastructure-dependent tests:
- Gates 0-5, 8-9 (per unit): Write code + run tests per task/subtask
- Gates 6-7 (per unit): Write/update integration and chaos test code, verify compilation, but do not execute tests (no containers)
- Gates 6-7 (end of cycle): Execute all integration and chaos tests once after all units complete
This keeps test code current with each feature while avoiding redundant container spin-ups during development.
MUST announce at start: "I'm using the ring:dev-cycle skill to orchestrate task execution through 11 gates (Gate 0–9, including 0.5 Delivery Verification). Gates 6-7 write tests per unit but execute at end of cycle."
See shared-patterns/shared-orchestrator-principle.md for full ORCHESTRATOR principle, role separation, forbidden/required actions, gate-to-agent mapping, and anti-rationalization table.
Summary: You orchestrate. Agents execute. If using Read/Write/Edit/Bash on source code → STOP. Dispatch agent.
This section defines exactly what the orchestrator CAN and CANNOT do.
| Action | Tool | Purpose |
|---|---|---|
| Read task files | Read | Load task definitions from docs/pre-dev/*/tasks.md or docs/ring:dev-refactor/*/tasks.md |
| Read state files | Read | Load/verify docs/ring:dev-cycle/current-cycle.json or docs/ring:dev-refactor/current-cycle.json |
| Read PROJECT_RULES.md | Read | Load project-specific rules |
| Write state files | Write | Persist cycle state to JSON |
| Track progress | TodoWrite | Maintain task list |
| Dispatch agents | Task | Send work to specialist agents |
| Ask user questions | AskUserQuestion | Get execution mode, approvals |
| WebFetch standards | WebFetch | Load Ring standards |
Any of these actions by orchestrator = IMMEDIATE VIOLATION. Dispatch agent instead.
If a task requires editing MORE than 3 files → MUST dispatch specialist agent.
This is not negotiable:
- 1-3 files of non-source content (markdown, json, yaml) → Orchestrator MAY edit directly
- 1+ source code files (`*.go`, `*.ts`, `*.tsx`) → MUST dispatch agent
- 4+ files of any type → MUST dispatch agent
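The rule can be sketched as a small decision helper. This is only an illustration of the rule above — `mustDispatch` and `isSource` are hypothetical names; the orchestrator applies this rule by judgment, not by running code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// isSource reports whether a file is source code per the rule above
// (*.go, *.ts, *.tsx always require a specialist agent).
func isSource(path string) bool {
	switch filepath.Ext(path) {
	case ".go", ".ts", ".tsx":
		return true
	}
	return false
}

// mustDispatch applies the file-count rule: any source file, or more
// than 3 files of any type, forces an agent dispatch.
func mustDispatch(files []string) bool {
	if len(files) > 3 {
		return true
	}
	for _, f := range files {
		if isSource(f) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(mustDispatch([]string{"README.md", "config.yaml"})) // ≤3 non-source files: orchestrator may edit
	fmt.Println(mustDispatch([]string{"internal/handler/user.go"})) // source file: dispatch
	fmt.Println(mustDispatch([]string{"a.md", "b.md", "c.md", "d.md"})) // 4+ files: dispatch
}
```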
┌─────────────────────────────────────────────────────────────────┐
│ CORRECT WORKFLOW ORDER │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Load task file (Read docs/pre-dev/*/tasks.md or docs/ring:dev-refactor/*/tasks.md) │
│ 2. Ask execution mode (AskUserQuestion) │
│ 3. Determine state path + Check/Load state (see State Path Selection) │
│ 4. WebFetch Ring Standards │
│ 5. ⛔ LOAD SUB-SKILL for current gate (Skill tool) │
│ 6. Execute sub-skill instructions (dispatch agent via Task) │
│ 7. Wait for agent completion │
│ 8. Verify agent output (Standards Coverage Table) │
│ 9. Update state (Write to JSON) │
│ 10. Proceed to next gate │
│ │
│ ════════════════════════════════════════════════════════════ │
│ ❌ WRONG: Load → Mode → Standards → Task(agent) directly │
│ ✅ RIGHT: Load → Mode → Standards → Skill(sub) → Task(agent) │
│ ════════════════════════════════════════════════════════════ │
└─────────────────────────────────────────────────────────────────┘
Before dispatching any agent, you MUST load the corresponding sub-skill first.
<cannot_skip>
- Gate 0: `Skill("ring:dev-implementation")` → then `Task(subagent_type="ring:backend-engineer-*", ...)`
- Gate 0.5: `Skill("ring:dev-delivery-verification")` → Verify all requirements are DELIVERED (not just created). Catches dead code, unwired structs, unregistered middleware. Also runs 7 automated checks: (A) file size ≤300 lines, (B) license headers, (C) linting, (D) migration safety, (E) vulnerability scanning, (F) API backward compatibility, (G) multi-tenant dual-mode. FAIL → return to Gate 0 with explicit fix instructions.
- Gate 1: `Skill("ring:dev-devops")` → then `Task(subagent_type="ring:devops-engineer", ...)`
- Gate 2: `Skill("ring:dev-sre")` → then `Task(subagent_type="ring:sre", ...)`
- Gate 3: `Skill("ring:dev-unit-testing")` → then `Task(subagent_type="ring:qa-analyst", test_mode="unit", ...)`
- Gate 4: `Skill("ring:dev-fuzz-testing")` → then `Task(subagent_type="ring:qa-analyst", test_mode="fuzz", ...)`
- Gate 5: `Skill("ring:dev-property-testing")` → then `Task(subagent_type="ring:qa-analyst", test_mode="property", ...)`
- Gate 6: `Skill("ring:dev-integration-testing")` → per unit: write/update tests + compile check (no execution); end of cycle: execute
- Gate 7: `Skill("ring:dev-chaos-testing")` → per unit: write/update tests + compile check (no execution); end of cycle: execute
- Gate 8: `Skill("ring:requesting-code-review")` → then 7x `Task(...)` in parallel
- Gate 9: `Skill("ring:dev-validation")` → N/A (verification only)
</cannot_skip>
Between "WebFetch standards" and "Task(agent)" there MUST be "Skill(sub-skill)".
The workflow for each gate is:
1. Skill("[sub-skill-name]") ← Load sub-skill instructions
2. Follow sub-skill instructions ← Sub-skill tells you HOW to dispatch
3. Task(subagent_type=...) ← Dispatch agent as sub-skill instructs
4. Validate agent output ← Per sub-skill validation rules
5. Update state ← Record results
Validation: See shared-patterns/custom-prompt-validation.md for max length (500 chars), sanitization rules, gate protection, and conflict handling.
If custom_prompt is set in state, inject it into ALL agent dispatches:
Task tool:
  subagent_type: "ring:backend-engineer-golang"
  prompt: |
    **CUSTOM CONTEXT (from user):**
    {state.custom_prompt}
    ---
    **Standard Instructions:**
    [... rest of agent prompt ...]

Rules for custom prompt:
- Inject at TOP of prompt - User context takes precedence
- Preserve in state - custom_prompt persists for resume
- Include in execution report - Document what context was used
- Forward via state - Sub-skills read `custom_prompt` from the state file and inject it into their agent dispatches (no explicit parameter passing needed)
Example custom prompts and their effect:
| Custom Prompt | Effect on Agents |
|---|---|
| "Focus on error handling first" | Agents prioritize error-related acceptance criteria |
| "Use existing UserRepository interface" | Agents integrate with specified interface instead of creating new |
| "Deprioritize UI polish" | Gate 3 still enforces 85% coverage, but agents deprioritize non-functional UI tweaks |
| "Prioritize observability gaps" | SRE gate gets more attention, implementation focuses on instrumentation |
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I know what the sub-skill does" | Knowledge ≠ execution. Sub-skill has iteration logic. | Load Skill() first |
| "Task() directly is faster" | Faster ≠ correct. Sub-skill has validation rules. | Load Skill() first |
| "Sub-skill just wraps Task()" | Sub-skills have retry logic, fix dispatch, validation. | Load Skill() first |
| "I'll follow the pattern manually" | Manual = error-prone. Sub-skill is the pattern. | Load Skill() first |
Between "WebFetch standards" and "Task(agent)" there MUST be "Skill(sub-skill)".
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "It's just one small file" | File count doesn't determine agent need. Language does. | DISPATCH specialist agent |
| "I already loaded the standards" | Loading standards ≠ permission to implement. Standards are for AGENTS. | DISPATCH specialist agent |
| "Agent dispatch adds overhead" | Overhead ensures compliance. Skip = skip verification. | DISPATCH specialist agent |
| "I can write Go/TypeScript" | Knowing language ≠ having Ring standards loaded. Agent has them. | DISPATCH specialist agent |
| "Just a quick fix" | "Quick" is irrelevant. All source changes require a specialist. | DISPATCH specialist agent |
| "I'll read the file first to understand" | Reading source → temptation to edit. Agent reads for you. | DISPATCH specialist agent |
| "Let me check if tests pass first" | Agent runs tests in TDD cycle. You don't run tests. | DISPATCH specialist agent |
If you catch yourself doing any of these, STOP IMMEDIATELY:
🚨 RED FLAG: About to Read *.go or *.ts file
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to Write/Create source code
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to Edit source code
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to run "go test" or "npm test"
→ STOP. Agent runs tests, not you.
🚨 RED FLAG: Thinking "I'll just..."
→ STOP. "Just" is the warning word. Dispatch agent.
🚨 RED FLAG: Thinking "This is simple enough..."
→ STOP. Simplicity is irrelevant. Dispatch agent.
🚨 RED FLAG: Standards loaded, but next action is not Task tool
→ STOP. After standards, IMMEDIATELY dispatch agent.
If you violated orchestrator boundaries:
- STOP current execution immediately
- DISCARD any direct changes (
git checkout -- .) - DISPATCH the correct specialist agent
- Agent implements from scratch following TDD
- Document the violation for feedback loop
Sunk cost of direct work is IRRELEVANT. Agent dispatch is MANDATORY.
<block_condition>
- Gate Failure: Tests not passing, review failed → STOP, cannot proceed to next gate
- Missing Standards: No PROJECT_RULES.md → STOP, report blocker and wait
- Agent Failure: Specialist agent returned errors → STOP, diagnose and report
- User Decision Required: Architecture choice, framework selection → STOP, present options </block_condition>
You CANNOT proceed when blocked. Report and wait for resolution.
<cannot_skip>
- All 11 gates must execute (0→0.5→1→2→3→4→5→6→7→8→9) - Each gate catches different issues
- All testing gates (3-7) are MANDATORY - Comprehensive test coverage ensures quality
- Gates execute in order (0→0.5→1→2→3→4→5→6→7→8→9) - Dependencies exist between gates
- Gate 8 requires all 7 reviewers - Different review perspectives are complementary
- Coverage threshold ≥ 85% - Industry standard for quality code
- PROJECT_RULES.md must exist - Cannot verify standards without target </cannot_skip>
No exceptions. User cannot override. Time pressure cannot override.
| Severity | Criteria | Examples |
|---|---|---|
| CRITICAL | Blocks deployment, security risk, data loss | Gate violation, skipped mandatory step |
| HIGH | Major functionality broken, standards violation | Missing tests, wrong agent dispatched |
| MEDIUM | Code quality, maintainability issues | Incomplete documentation, minor gaps |
| LOW | Best practices, optimization | Style improvements, minor refactoring |
Report all severities. Let user prioritize.
MEDIUM issues found in Gate 8 (review) MUST be fixed. No exceptions.
| Request | Why It's WRONG | Required Action |
|---|---|---|
| "Can reviewer clarify if MEDIUM can defer?" | Reviewer already decided. MEDIUM means FIX. | Fix the issue, re-run reviewers |
| "Ask if this specific case is different" | Reviewer verdict accounts for context already. | Fix the issue, re-run reviewers |
| "Request exception for business reasons" | Reviewers know business context. Verdict is final. | Fix the issue, re-run reviewers |
Severity mapping is absolute:
- CRITICAL/HIGH/MEDIUM → Fix NOW, re-run all 7 reviewers
- LOW → Add TODO(review): comment
- Cosmetic → Add FIXME(nitpick): comment
No negotiation. No exceptions. No "special cases".
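The absolute mapping above can be sketched as a lookup (hypothetical `actionFor` helper; severity strings as the reviewers emit them):

```go
package main

import "fmt"

// actionFor encodes the absolute severity mapping: CRITICAL, HIGH,
// and MEDIUM are all fixed immediately; only LOW and cosmetic issues
// become comments.
func actionFor(severity string) string {
	switch severity {
	case "CRITICAL", "HIGH", "MEDIUM":
		return "fix now, re-run all 7 reviewers"
	case "LOW":
		return "add TODO(review) comment"
	default: // cosmetic
		return "add FIXME(nitpick) comment"
	}
}

func main() {
	fmt.Println(actionFor("MEDIUM")) // MEDIUM gets the same treatment as CRITICAL
}
```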
See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios.
Gate-specific note: Execution mode selection affects CHECKPOINTS (user approval pauses), not GATES (quality checks). All gates execute regardless of mode.
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations.
Gate-specific rationalizations:
| Excuse | Reality |
|---|---|
| "Automatic mode means faster" | Automatic mode skips CHECKPOINTS, not GATES. Same quality, less interruption. |
| "Automatic mode will skip review" | Automatic mode affects user approval pauses, not quality gates. All gates execute regardless. |
| "Defense in depth exists (frontend validates)" | Frontend can be bypassed. Backend is the last line. Fix at source. |
| "Backlog the Medium issue, it's documented" | Documented risk ≠ mitigated risk. Medium in Gate 8 = fix NOW, not later. |
| "Risk-based prioritization allows deferral" | Gates ARE the risk-based system. Reviewers define severity, not you. |
See shared-patterns/shared-red-flags.md for universal red flags.
If you catch yourself thinking any of those patterns, STOP immediately and return to gate execution.
The "just this once" pattern leads to complete gate erosion:
Day 1: "Skip review just this once" → Approved (precedent set)
Day 2: "Skip testing, we did it last time" → Approved (precedent extended)
Day 3: "Skip implementation checks, pattern established" → Approved (gates meaningless)
Day 4: Production incident from Day 1 code
Prevention rules:
- No incremental exceptions - Each exception becomes the new baseline
- Document every pressure - Log who requested, why, outcome
- Escalate patterns - If same pressure repeats, escalate to team lead
- Gates are binary - Complete or incomplete. No "mostly done".
A gate is COMPLETE only when all components finish successfully:
| Gate | Components Required | Partial = FAIL |
|---|---|---|
| 0.1 | TDD-RED: Failing test written + failure output captured | Test exists but no failure output = FAIL |
| 0.2 | TDD-GREEN: Implementation passes test | Code exists but test fails = FAIL |
| 0 | Both 0.1 and 0.2 complete | 0.1 done without 0.2 = FAIL |
| 1 | Dockerfile + docker-compose + .env.example | Missing any = FAIL |
| 2 | Structured JSON logs with trace correlation | Partial structured logs = FAIL |
| 3 | Unit test coverage ≥ 85% + all AC tested | 84% = FAIL |
| 4 | Fuzz tests with seed corpus ≥ 5 entries | Missing corpus = FAIL |
| 5 | Property-based tests for domain invariants | Missing property tests = FAIL |
| 6 | Integration tests with testcontainers | No testcontainers = FAIL |
| 7 | Chaos tests for failure scenarios | Missing chaos tests = FAIL |
| 8 | All 7 reviewers PASS | 6/7 reviewers = FAIL |
| 9 | Explicit "APPROVED" from user | "Looks good" = not approved |
CRITICAL for Gate 8: Running 6 of 7 reviewers is not a partial pass - it's a FAIL. Re-run all 7 reviewers.
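The all-or-nothing rule for Gate 8 can be sketched as follows, assuming a hypothetical representation of the review state keyed by reviewer name (shown here without the `ring:` prefix):

```go
package main

import "fmt"

// gate8Complete encodes the binary completion rule for Gate 8:
// all 7 reviewer verdicts must be present and PASS. 6/7 = FAIL.
func gate8Complete(verdicts map[string]string) bool {
	required := []string{
		"code-reviewer", "business-logic-reviewer", "security-reviewer",
		"nil-safety-reviewer", "test-reviewer", "consequences-reviewer",
		"dead-code-reviewer",
	}
	for _, r := range required {
		if verdicts[r] != "PASS" {
			return false
		}
	}
	return true
}

func main() {
	six := map[string]string{
		"code-reviewer": "PASS", "business-logic-reviewer": "PASS",
		"security-reviewer": "PASS", "nil-safety-reviewer": "PASS",
		"test-reviewer": "PASS", "consequences-reviewer": "PASS",
		// dead-code-reviewer missing: 6/7 is not a partial pass
	}
	fmt.Println(gate8Complete(six))
}
```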
Anti-Rationalization for Partial Gates:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "6 of 7 reviewers passed" | Gate 8 requires all 7. 6/7 = 0/7. | Re-run all 7 reviewers |
| "Gate mostly complete" | Mostly ≠ complete. Binary: done or not done. | Complete all components |
| "Can finish remaining in next cycle" | Gates don't carry over. Complete NOW. | Finish current gate |
| "Core components done, optional can wait" | No component is optional within a gate. | Complete all components |
| "Unit tests are enough, skip fuzz/property" | Each test type catches different bugs. All are MANDATORY. | Execute all testing gates (3-7) |
| "No external dependencies, skip integration" | Integration testing is MANDATORY. Write tests per unit, execute at end of cycle. | Write Gate 6 tests per unit, execute at end |
Gates MUST execute in order: 0 → 0.5 → 1 → 2 → 3 → 4 → 5 → 6(write) → 7(write) → 8 → 9. All 11 gates are MANDATORY.
Deferred Execution Model for Gates 6-7:
- Per unit: Write/update test code + verify compilation (no container execution)
- End of cycle: Execute all integration and chaos tests (containers spun up once), then verify multi-tenant dual-mode compliance (already implemented at Gate 0, verified at Gate 0.5G)
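For a Go project, the per-unit/end-of-cycle split could look like this sketch. The `integration` build tag and the exact flags are assumed project conventions, not Ring requirements; `go test -run '^$'` compiles and links the test packages while selecting no tests, which makes it a cheap compile check:

```go
package main

import "fmt"

// testCommand sketches how Gates 6-7 split "compile check" from
// "execute": per unit, run the test packages with a pattern that
// matches no test names (compiles everything, executes nothing);
// at end of cycle, run the full suite with containers.
func testCommand(phase string) []string {
	if phase == "per-unit" {
		// -run '^$' matches no tests: compile-only verification
		return []string{"go", "test", "-tags", "integration", "-run", "^$", "./..."}
	}
	// end of cycle: execute everything (testcontainers spin up here)
	return []string{"go", "test", "-tags", "integration", "./..."}
}

func main() {
	fmt.Println(testCommand("per-unit"))
	fmt.Println(testCommand("end-of-cycle"))
}
```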
| Violation | Why It's WRONG | Consequence |
|---|---|---|
| Skip Gate 1 (DevOps) | "No infra changes" | Code without container = works on my machine only |
| Skip Gate 2 (SRE) | "Observability later" | Blind production = debugging nightmare |
| Skip Gate 4 (Fuzz) | "Unit tests are enough" | Edge cases and crashes not discovered |
| Skip Gate 5 (Property) | "Too complex" | Domain invariant violations not detected |
| Skip Gate 6 (Integration) | "No external dependencies" | Internal integration bugs surface in production |
| Skip Gate 7 (Chaos) | "Infra is reliable" | System fails under real-world conditions |
| Reorder Gates | "Review before test" | Reviewing untested code wastes reviewer time |
| Parallel Gates | "Run 3 and 4 together" | Dependencies exist. Order is intentional. |
All testing gates (3-7) are MANDATORY. No exceptions. No skip reasons.
Gates are not parallelizable across different gates. Sequential execution is MANDATORY.
| Gate | Skill | Purpose | Agent | Per Unit | Standards Module |
|---|---|---|---|---|---|
| 0 | ring:dev-implementation | Write code following TDD (single-tenant) | Based on task language/domain | Write + Run | core.md, domain.md |
| 1 | ring:dev-devops | Infrastructure and deployment | ring:devops-engineer | Write + Run | devops.md |
| 2 | ring:dev-sre | Observability (health, logging, tracing) | ring:sre | Write + Run | sre.md |
| 3 | ring:dev-unit-testing | Unit tests for acceptance criteria | ring:qa-analyst (test_mode: unit) | Write + Run | testing-unit.md |
| 4 | ring:dev-fuzz-testing | Fuzz tests for edge cases and crashes | ring:qa-analyst (test_mode: fuzz) | Write + Run | testing-fuzz.md |
| 5 | ring:dev-property-testing | Property-based tests for domain invariants | ring:qa-analyst (test_mode: property) | Write + Run | testing-property.md |
| 6 | ring:dev-integration-testing | Integration tests with testcontainers | ring:qa-analyst (test_mode: integration) | Write only | testing-integration.md |
| 7 | ring:dev-chaos-testing | Chaos tests for failure scenarios | ring:qa-analyst (test_mode: chaos) | Write only | testing-chaos.md |
| 8 | ring:requesting-code-review | Parallel code review (7 reviewers) | ring:code-reviewer, ring:business-logic-reviewer, ring:security-reviewer, ring:nil-safety-reviewer, ring:test-reviewer, ring:consequences-reviewer, ring:dead-code-reviewer | Run | N/A |
| 9 | ring:dev-validation | Final acceptance validation | N/A (verification) | Run | N/A |
All gates are MANDATORY. No exceptions. No skip reasons.
Gates 6-7 Deferred Execution: Test code is written/updated per unit to stay current. Actual test execution (with containers) happens once at end of cycle.
PM Team Output → Dev Team Execution (/ring:dev-cycle)
| Input Type | Path | Structure |
|---|---|---|
| Tasks only | docs/pre-dev/{feature}/tasks.md | T-001, T-002, T-003 with requirements + acceptance criteria |
| Tasks + Subtasks | docs/pre-dev/{feature}/ | tasks.md + subtasks/{task-id}/ST-XXX-01.md, ST-XXX-02.md... |
Core Principle: Each execution unit passes through all 11 gates. Gates 6-7 write test code per unit but defer execution to end of cycle.
Per-Unit Flow: Unit → Gate 0→0.5(delivery verify)→1→2→3→4→5→6(write)→7(write)→8→9 → 🔒 Unit Checkpoint → 🔒 Task Checkpoint → Next Unit

End-of-Cycle Flow: All units done → Gate 6(execute)→7(execute) → Multi-Tenant Dual-Mode Verification → Final Commit → Feedback
| Scenario | Execution Unit | Gates Per Unit | End of Cycle |
|---|---|---|---|
| Task without subtasks | Task itself | 11 gates (6-7 write only) | Gate 6-7 execute |
| Task with subtasks | Each subtask | 11 gates per subtask (6-7 write only) | Gate 6-7 execute |
Why deferred execution for Gates 6-7:
- Integration tests require testcontainers (slow to spin up/tear down)
- Chaos tests require Toxiproxy infrastructure
- Running containers per subtask is wasteful when subsequent subtasks modify the same code
- Test code stays current (written per unit), infrastructure cost is paid once
User selects when commits happen (Step 7 of initialization).
| Option | When Commit Happens | Use Case |
|---|---|---|
| (a) Per subtask | After each subtask passes Gate 9 | Fine-grained history, easy rollback per subtask |
| (b) Per task | After all subtasks of a task complete | Logical grouping, one commit per feature chunk |
| (c) At the end | After entire cycle completes | Single commit with all changes, clean history |
| Timing | Message Format | Example |
|---|---|---|
| Per subtask | feat({subtask_id}): {subtask_title} | feat(ST-001-02): implement user authentication handler |
| Per task | feat({task_id}): {task_title} | feat(T-001): implement user authentication |
| At the end | feat({cycle_id}): complete dev cycle for {feature} | feat(cycle-abc123): complete dev cycle for auth-system |
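The message formats can be sketched as a small formatter (hypothetical `commitMessage` helper; the timing values match the `commit_timing` state field):

```go
package main

import "fmt"

// commitMessage sketches the commit message formats above: per-subtask
// and per-task commits use the unit id and title; the end-of-cycle
// commit uses the cycle id and feature name.
func commitMessage(timing, id, title string) string {
	switch timing {
	case "per_subtask", "per_task":
		return fmt.Sprintf("feat(%s): %s", id, title)
	case "at_end":
		return fmt.Sprintf("feat(%s): complete dev cycle for %s", id, title)
	}
	return ""
}

func main() {
	fmt.Println(commitMessage("per_subtask", "ST-001-02", "implement user authentication handler"))
	fmt.Println(commitMessage("at_end", "cycle-abc123", "auth-system"))
}
```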
| Execution Mode | Commit Timing | Behavior |
|---|---|---|
| Manual per subtask | Per subtask | Commit + checkpoint after each subtask |
| Manual per subtask | Per task | Checkpoint after subtask, commit after task |
| Manual per subtask | At end | Checkpoint after subtask, commit at cycle end |
| Manual per task | Per subtask | Commit after subtask, checkpoint after task |
| Manual per task | Per task | Commit + checkpoint after task |
| Manual per task | At end | Checkpoint after task, commit at cycle end |
| Automatic | Per subtask | Commit after each subtask, no checkpoints |
| Automatic | Per task | Commit after task, no checkpoints |
| Automatic | At end | Single commit at cycle end, no checkpoints |
Note: Checkpoints (user approval pauses) are controlled by execution_mode. Commits are controlled by commit_timing. They are independent settings.
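The independence of the two settings can be sketched as separate predicates (hypothetical helper names; values match the `execution_mode` and `commit_timing` state fields):

```go
package main

import "fmt"

// Checkpoints depend only on execution_mode; automatic mode never
// pauses for approval.
func checkpointAfterSubtask(mode string) bool { return mode == "manual_per_subtask" }
func checkpointAfterTask(mode string) bool    { return mode == "manual_per_task" }

// Commits depend only on commit_timing; "at_end" commits once at
// cycle completion.
func commitAfterSubtask(timing string) bool { return timing == "per_subtask" }
func commitAfterTask(timing string) bool    { return timing == "per_task" }

func main() {
	// Manual per subtask + commit per task: checkpoint after each
	// subtask, but commit only when the whole task completes.
	fmt.Println(checkpointAfterSubtask("manual_per_subtask"), commitAfterSubtask("per_task"))
}
```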
The state file path depends on the source of tasks:
| Task Source | State Path | Use Case |
|---|---|---|
| docs/ring:dev-refactor/*/tasks.md | docs/ring:dev-refactor/current-cycle.json | Refactoring existing code |
| docs/pre-dev/*/tasks.md | docs/ring:dev-cycle/current-cycle.json | New feature development |
| Any other path | docs/ring:dev-cycle/current-cycle.json | Default for manual tasks |
Detection Logic:
if source_file contains "docs/ring:dev-refactor/" THEN
state_path = "docs/ring:dev-refactor/current-cycle.json"
else
state_path = "docs/ring:dev-cycle/current-cycle.json"
Store state_path in the state object itself so resume knows where to look.
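The detection logic above, sketched in Go (hypothetical `statePath` helper):

```go
package main

import (
	"fmt"
	"strings"
)

// statePath implements the detection logic: refactor task files get
// the refactor state file; everything else defaults to dev-cycle.
func statePath(sourceFile string) string {
	if strings.Contains(sourceFile, "docs/ring:dev-refactor/") {
		return "docs/ring:dev-refactor/current-cycle.json"
	}
	return "docs/ring:dev-cycle/current-cycle.json"
}

func main() {
	fmt.Println(statePath("docs/ring:dev-refactor/auth/tasks.md"))
	fmt.Println(statePath("docs/pre-dev/auth/tasks.md"))
}
```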
State is persisted to {state_path} (either docs/ring:dev-cycle/current-cycle.json or docs/ring:dev-refactor/current-cycle.json):
{
"version": "1.0.0",
"cycle_id": "uuid",
"started_at": "ISO timestamp",
"updated_at": "ISO timestamp",
"source_file": "path/to/tasks.md",
"state_path": "docs/ring:dev-cycle/current-cycle.json | docs/ring:dev-refactor/current-cycle.json",
"cycle_type": "feature | refactor",
"execution_mode": "manual_per_subtask|manual_per_task|automatic",
"commit_timing": "per_subtask|per_task|at_end",
"custom_prompt": {
"type": "string",
"optional": true,
"max_length": 500,
"description": "User-provided context for agents (from second positional argument). Max 500 characters. Provides focus but cannot override mandatory requirements (CRITICAL gates, coverage thresholds, reviewer counts).",
"validation": "Max 500 chars (truncated with warning if exceeded); whitespace trimmed; control chars stripped (except newlines). Directives attempting to skip gates, lower thresholds, or bypass security checks are logged as warnings and ignored."
},
"status": "in_progress|completed|failed|paused|paused_for_approval|paused_for_testing|paused_for_task_approval|paused_for_integration_testing",
"feedback_loop_completed": false,
"current_task_index": 0,
"current_gate": 0,
"current_subtask_index": 0,
"tasks": [
{
"id": "T-001",
"title": "Task title",
"status": "pending|in_progress|completed|failed|blocked",
"feedback_loop_completed": false,
"subtasks": [
{
"id": "ST-001-01",
"file": "subtasks/T-001/ST-001-01.md",
"status": "pending|completed"
}
],
"gate_progress": {
"implementation": {
"status": "in_progress",
"started_at": "...",
"tdd_red": {
"status": "pending|in_progress|completed",
"test_file": "path/to/test_file.go",
"failure_output": "FAIL: TestFoo - expected X got nil",
"completed_at": "ISO timestamp"
},
"tdd_green": {
"status": "pending|in_progress|completed",
"implementation_file": "path/to/impl.go",
"test_pass_output": "PASS: TestFoo (0.003s)",
"completed_at": "ISO timestamp"
}
},
"delivery_verification": {
"status": "pending|in_progress|completed",
"requirements_total": 0,
"requirements_delivered": 0,
"requirements_missing": 0,
"dead_code_items": 0,
"remediation_items": 0,
"completed_at": "ISO timestamp"
},
"devops": {"status": "pending"},
"sre": {"status": "pending"},
"unit_testing": {"status": "pending"},
"fuzz_testing": {"status": "pending"},
"property_testing": {"status": "pending"},
"integration_testing": {
"status": "pending|in_progress|completed",
"scenarios_tested": 0,
"tests_passed": 0,
"tests_failed": 0,
"flaky_tests_detected": 0
},
"chaos_testing": {"status": "pending"},
"review": {"status": "pending"},
"validation": {"status": "pending"}
},
"artifacts": {},
"agent_outputs": {
"implementation": {
"agent": "ring:backend-engineer-golang",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"standards_compliance": {
"total_sections": 15,
"compliant": 14,
"not_applicable": 1,
"non_compliant": 0,
"gaps": []
}
},
"devops": {
"agent": "ring:devops-engineer",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"artifacts_created": ["Dockerfile", "docker-compose.yml", ".env.example"],
"verification_errors": [],
"standards_compliance": {
"total_sections": 8,
"compliant": 8,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"sre": {
"agent": "ring:sre",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"instrumentation_coverage": "92%",
"validation_errors": [],
"standards_compliance": {
"total_sections": 10,
"compliant": 10,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"unit_testing": {
"agent": "ring:qa-analyst",
"test_mode": "unit",
"output": "## Summary\n...",
"verdict": "PASS",
"coverage_actual": 87.5,
"coverage_threshold": 85,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"failures": [],
"uncovered_criteria": [],
"standards_compliance": {
"total_sections": 6,
"compliant": 6,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"fuzz_testing": {
"agent": "ring:qa-analyst",
"test_mode": "fuzz",
"output": "## Summary\n...",
"verdict": "PASS",
"corpus_entries": 5,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"standards_compliance": {
"total_sections": 5,
"compliant": 5,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"property_testing": {
"agent": "ring:qa-analyst",
"test_mode": "property",
"output": "## Summary\n...",
"verdict": "PASS",
"properties_tested": 3,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"standards_compliance": {
"total_sections": 5,
"compliant": 5,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"integration_testing": {
"agent": "ring:qa-analyst",
"test_mode": "integration",
"output": "## Summary\n...",
"verdict": "PASS",
"scenarios_tested": 5,
"tests_passed": 5,
"tests_failed": 0,
"flaky_tests_detected": 0,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"standards_compliance": {
"total_sections": 10,
"compliant": 10,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"chaos_testing": {
"agent": "ring:qa-analyst",
"test_mode": "chaos",
"output": "## Summary\n...",
"verdict": "PASS",
"failure_scenarios_tested": 4,
"recovery_verified": true,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"standards_compliance": {
"total_sections": 5,
"compliant": 5,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"review": {
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"code_reviewer": {
"agent": "ring:code-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 12,
"compliant": 12,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"business_logic_reviewer": {
"agent": "ring:business-logic-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 8,
"compliant": 8,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"security_reviewer": {
"agent": "ring:security-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 10,
"compliant": 10,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
}
},
"validation": {
"result": "approved|rejected",
"timestamp": "ISO timestamp"
}
}
}
],
"metrics": {
"total_duration_ms": 0,
"gate_durations": {},
"review_iterations": 0,
"testing_iterations": 0
}
}

These schemas enable ring:dev-feedback-loop to analyze issues without parsing raw output.
{
"section": "Error Handling (MANDATORY)",
"status": "❌",
"reason": "Missing error wrapping with context",
"file": "internal/handler/user.go",
"line": 45,
"evidence": "return err // should wrap with additional context"
}

{
"test_name": "TestUserCreate_InvalidEmail",
"test_file": "internal/handler/user_test.go",
"error_type": "assertion",
"expected": "ErrInvalidEmail",
"actual": "nil",
"message": "Expected validation error for invalid email format",
"stack_trace": "user_test.go:42 → user.go:28"
}

{
"severity": "MEDIUM",
"category": "error-handling",
"description": "Error not wrapped with context before returning",
"file": "internal/handler/user.go",
"line": 45,
"suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
"fixed": false,
"fixed_in_iteration": null
}

{
"check": "docker_build",
"status": "FAIL",
"error": "COPY failed: file not found in build context: go.sum",
"suggestion": "Ensure go.sum exists and is not in .dockerignore"
}

{
"check": "structured_logging",
"status": "FAIL",
"file": "internal/handler/user.go",
"line": 32,
"error": "Using fmt.Printf instead of structured logger",
"suggestion": "Use logger.Info().Str(\"user_id\", id).Msg(\"user created\")"
}

Each gate MUST populate its structured fields when saving to state:
| Gate | Fields to Populate |
|---|---|
| Gate 0 (Implementation) | standards_compliance (total, compliant, gaps[]) |
| Gate 1 (DevOps) | standards_compliance + verification_errors[] |
| Gate 2 (SRE) | standards_compliance + validation_errors[] |
| Gate 3 (Unit Testing) | standards_compliance + failures[] + uncovered_criteria[] |
| Gate 4 (Fuzz Testing) | standards_compliance + corpus_entries |
| Gate 5 (Property Testing) | standards_compliance + properties_tested |
| Gate 6 (Integration Testing) | standards_compliance + scenarios_tested + tests_passed + tests_failed + flaky_tests_detected |
| Gate 7 (Chaos Testing) | standards_compliance + failure_scenarios_tested + recovery_verified |
| Gate 8 (Review) | standards_compliance per reviewer + issues[] per reviewer |
All gates track standards_compliance:
- `total_sections`: Count from agent's standards file (via standards-coverage-table.md)
- `compliant`: Sections marked ✅ in Standards Coverage Table
- `not_applicable`: Sections marked N/A
- `non_compliant`: Sections marked ❌ (MUST be 0 to pass gate)
- `gaps[]`: Detailed info for each ❌ section (even if later fixed)
Empty arrays [] indicate no issues found - this is valid data for feedback-loop.
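For illustration, a populated `standards_compliance` object after a gap was found and then fixed might look like the following Python literal (all values hypothetical):

```python
# Hypothetical standards_compliance snapshot for a single gate.
# The gap stays in gaps[] even though it was later fixed, so
# ring:dev-feedback-loop can still analyze what went wrong.
standards_compliance = {
    "total_sections": 12,
    "compliant": 11,
    "not_applicable": 1,
    "non_compliant": 0,  # must be 0 for the gate to pass
    "gaps": [
        {
            "section": "Error Handling (MANDATORY)",
            "status": "❌",
            "reason": "Missing error wrapping with context",
            "file": "internal/handler/user.go",
            "line": 45,
        }
    ],
}

# Sanity check: every section is accounted for.
total = (standards_compliance["compliant"]
         + standards_compliance["not_applicable"]
         + standards_compliance["non_compliant"])
assert total == standards_compliance["total_sections"]
```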
"Update state" means BOTH update the object and write to file. Not just in-memory.
You MUST execute these steps after completing any gate (0 through 9):
# Step 1: Update state object with gate results
state.tasks[current_task_index].gate_progress.[gate_name].status = "completed"
state.tasks[current_task_index].gate_progress.[gate_name].completed_at = "[ISO timestamp]"
state.current_gate = [next_gate_number]
state.updated_at = "[ISO timestamp]"
# Step 2: Write to file (MANDATORY - use Write tool)
Write tool:
file_path: [state.state_path] # Use state_path from state object
content: [full JSON state]
# Step 3: Verify persistence (MANDATORY - use Read tool)
Read tool:
file_path: [state.state_path] # Use state_path from state object
# Confirm current_gate and gate_progress match expected values

| Checkpoint | MUST Update | MUST Write File |
|---|---|---|
| Before Gate 0 (task start) | task.status = "in_progress" in JSON + tasks.md Status → 🔄 Doing | ✅ YES |
| Gate 0.1 (TDD-RED) | tdd_red.status, tdd_red.failure_output | ✅ YES |
| Gate 0.2 (TDD-GREEN) | tdd_green.status, implementation.status | ✅ YES |
| Gate 0.5 (Delivery Verification) | delivery_verification.status, delivery_verification.requirements_total, delivery_verification.requirements_delivered, delivery_verification.dead_code_items | ✅ YES |
| Gate 1 (DevOps) | devops.status, agent_outputs.devops | ✅ YES |
| Gate 2 (SRE) | sre.status, agent_outputs.sre | ✅ YES |
| Gate 3 (Unit Testing) | unit_testing.status, agent_outputs.unit_testing | ✅ YES |
| Gate 4 (Fuzz Testing) | fuzz_testing.status, agent_outputs.fuzz_testing | ✅ YES |
| Gate 5 (Property Testing) | property_testing.status, agent_outputs.property_testing | ✅ YES |
| Gate 6 (Integration Testing) | integration_testing.status, agent_outputs.integration_testing | ✅ YES |
| Gate 7 (Chaos Testing) | chaos_testing.status, agent_outputs.chaos_testing | ✅ YES |
| Gate 8 (Review) | review.status, agent_outputs.review | ✅ YES |
| Gate 9 (Validation) | validation.status (execution unit only — do NOT touch task-level status here) | ✅ YES |
| Step 11.1 (Unit Approval) | status = "paused_for_approval" | ✅ YES |
| Step 11.2 (Task Approval) | task.status = "completed" in JSON + tasks.md Status → ✅ Done | ✅ YES |
| HARD BLOCK (any gate) | task.status = "failed" in JSON + tasks.md Status → ❌ Failed | ✅ YES |
tasks.md Status update rules (apply at the three checkpoints above):
If state.source_file is absent or file does not exist → log warning "tasks.md Status updates skipped: source_file missing" and skip all status updates for this cycle.
task_id = state.tasks[state.current_task_index].id
# Always the parent TASK ID — do NOT use current_subtask_index
# Rows where column 1 is "TOTAL" or empty → skip, not a task row
Use Edit tool on state.source_file (tasks.md):
- Find the row starting with `| {task_id} |` in the `## Summary` table
- Before Gate 0: replace `⏸️ Pending` with `🔄 Doing`
- If already `🔄 Doing` (resumed cycle) → skip, no change needed
- Step 11.2 (all subtasks done, user approved): replace `🔄 Doing` with `✅ Done`
- HARD BLOCK (any gate, task abandoned): replace `🔄 Doing` with `❌ Failed`
- If row shows `⏸️ Pending` (unexpected) → replace with target value anyway
- If row not found or no Status column → log warning "Status update skipped: task {task_id} row not found in {source_file}" and continue, do not abort
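The row update described above can be sketched as follows (the function name and sample table are illustrative, not part of the skill):

```python
import re

def update_task_status(tasks_md: str, task_id: str, old: str, new: str) -> str:
    """Replace the Status emoji in the Summary-table row for task_id.

    Returns the text unchanged if the row is not found; the skill logs a
    warning and continues rather than aborting.
    """
    updated_lines = []
    found = False
    for line in tasks_md.splitlines():
        # Match only the row whose first column is exactly task_id;
        # "TOTAL" and empty rows never match.
        if re.match(rf"\|\s*{re.escape(task_id)}\s*\|", line) and old in line:
            line = line.replace(old, new, 1)
            found = True
        updated_lines.append(line)
    if not found:
        print(f"Status update skipped: task {task_id} row not found")
    return "\n".join(updated_lines)

table = "| T-001 | Add auth | ⏸️ Pending |\n| TOTAL | | |"
print(update_task_status(table, "T-001", "⏸️ Pending", "🔄 Doing"))
```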
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I'll save state at the end" | Crash/timeout loses all progress | Save after each gate |
| "State is in memory, that's updated" | Memory is volatile. File is persistent. | Write to JSON file |
| "Only save on checkpoints" | Gates without saves = unrecoverable on resume | Save after every gate |
| "Write tool is slow" | Write takes <100ms. Lost progress takes hours. | Write after every gate |
| "I updated the state variable" | Variable ≠ file. Without Write tool, nothing persists. | Use Write tool explicitly |
After each gate, the state file MUST reflect:

- `current_gate` = next gate number
- `updated_at` = recent timestamp
- Previous gate `status` = "completed"

If verification fails → State was not persisted. Re-execute Write tool.
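The update-write-verify loop can be sketched in Python (function name, state layout, and temp-file usage are illustrative):

```python
import json
import tempfile

def persist_gate_completion(state: dict, gate_name: str, next_gate: int,
                            timestamp: str) -> None:
    """Update the in-memory state, write it to disk, then read it back.

    Mirrors the three mandatory steps: a variable update alone never
    counts as persistence; only the verified file write does.
    """
    # Step 1: update the state object.
    task = state["tasks"][state["current_task_index"]]
    task["gate_progress"][gate_name]["status"] = "completed"
    task["gate_progress"][gate_name]["completed_at"] = timestamp
    state["current_gate"] = next_gate
    state["updated_at"] = timestamp

    # Step 2: write the full JSON state to the state file.
    with open(state["state_path"], "w") as f:
        json.dump(state, f, indent=2)

    # Step 3: read it back and confirm the write landed.
    with open(state["state_path"]) as f:
        on_disk = json.load(f)
    assert on_disk["current_gate"] == next_gate, "state was not persisted"

state = {
    "state_path": tempfile.mkstemp(suffix=".json")[1],
    "current_task_index": 0,
    "current_gate": 3,
    "updated_at": "",
    "tasks": [{"gate_progress": {"unit_testing": {}}}],
}
persist_gate_completion(state, "unit_testing", 4, "2025-01-01T00:00:00Z")
```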
NON-NEGOTIABLE. Cycle CANNOT proceed without project standards.
┌─────────────────────────────────────────────────────────────────────────────┐
│ Check: Does docs/PROJECT_RULES.md exist? │
│ │
│ ├── YES → Proceed to Step 1 (Initialize or Resume) │
│ │ │
│ └── no → ASK: "Is this a LEGACY project (created without PM workflow)?" │
│ │ │
│ ├── YES (legacy project) → LEGACY PROJECT ANALYSIS: │
│ │ Step 1: Dispatch ring:codebase-explorer (technical info only) │
│ │ Step 2: Ask 3 questions (what agent can't determine): │
│ │ 1. What do you need help with? │
│ │ 2. Any external APIs not visible in code? │
│ │ 3. Any specific technology not in Ring Standards? │
│ │ Step 3: Generate PROJECT_RULES.md (deduplicated from Ring) │
│ │ Note: Business rules belong in PRD, not in PROJECT_RULES │
│ │ → Proceed to Step 1 │
│ │ │
│ └── no (new project) → ASK: "Do you have PRD, TRD, or Feature Map?" │
│ │ │
│ ├── YES (has PM docs) → "Please provide the file path(s)" │
│ │ → Read PRD/TRD/Feature Map → Extract info │
│ │ → Generate PROJECT_RULES.md │
│ │ → Ask supplementary questions if info is incomplete │
│ │ → Save and proceed to Step 1 │
│ │ │
│ └── no (no PM docs) → ⛔ HARD BLOCK: │
│ "PM documents are REQUIRED for new projects. │
│ Run /ring:pre-dev-full or /ring:pre-dev-feature first." │
│ → STOP (cycle cannot proceed) │
└─────────────────────────────────────────────────────────────────────────────┘
# Check if file exists
Read tool:
file_path: "docs/PROJECT_RULES.md"
# If file exists and has content → Proceed to Step 1
# If file does not exist or is empty → Continue to Step 0.2

Use AskUserQuestion:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 PROJECT_RULES.md not FOUND │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I need to create docs/PROJECT_RULES.md to understand your │
│ project's specific conventions and domain. │
│ │
│ First, I need to know: Is this a LEGACY project? │
│ │
│ A legacy project is one that was created WITHOUT using the │
│ PM team workflow (no PRD, TRD, or Feature Map documents). │
│ │
└─────────────────────────────────────────────────────────────────┘
"Is this a legacy project (created without PM team workflow)?"
(a) Yes, this is a legacy project (b) No, this is a new project following Ring workflow
Go to Step 0.2.1 (Legacy Project Analysis)
Go to Step 0.3 (Check for PM Documents)
For legacy projects, analyze codebase for TECHNICAL information only:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 LEGACY PROJECT ANALYSIS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Since this is a legacy project, I'll analyze the codebase │
│ for TECHNICAL information (not business rules). │
│ │
│ Step 1: Automated analysis (ring:codebase-explorer) │
│ Step 2: Ask for project-specific tech not in Ring Standards │
│ Step 3: Generate PROJECT_RULES.md (deduplicated) │
│ │
│ Note: Business rules belong in PRD/product docs, not here. │
│ │
└─────────────────────────────────────────────────────────────────┘
⛔ You MUST use the Task tool to dispatch ring:codebase-explorer. This is not implicit.
Dispatch ring:codebase-explorer to analyze the legacy project for TECHNICAL information:
Action: Use Task tool with EXACTLY these parameters:
┌─────────────────────────────────────────────────────────────────────────────────┐
│ ⛔ If Task tool not used → Analysis does not happen → PROJECT_RULES.md INVALID │
└─────────────────────────────────────────────────────────────────────────────────┘
# Agent 1: Codebase Explorer - Technical Analysis
Task tool:
subagent_type: "ring:codebase-explorer"
description: "Analyze legacy project for PROJECT_RULES.md"
prompt: |
Analyze this LEGACY codebase to extract technical information for PROJECT_RULES.md.
This is an existing project created without PM documentation.
Your job is to understand what exists in the code.
**Extract:**
1. **Project Structure:** Directory layout, module organization
2. **Technical Stack:** Languages, frameworks, databases, external services
3. **Architecture Patterns:** Clean Architecture, MVC, microservices, etc.
4. **Existing Features:** Main modules, endpoints, capabilities
5. **Internal Libraries:** Shared packages, utilities
6. **Configuration:** Environment variables, config patterns
7. **Database:** Schema patterns, migrations, ORM usage
8. **External Integrations:** APIs consumed, message queues
**Output format:**
## Technical Analysis (Legacy Project)
### Project Overview
[What this project appears to do based on code analysis]
### Technical Stack
- Language: [detected]
- Framework: [detected]
- Database: [detected]
- External Services: [detected]
### Architecture Patterns
[Detected patterns]
### Existing Features
[List of features/modules found]
### Project Structure
[Directory layout explanation]
### Configuration
[Env vars, config files found]
### External Integrations
[APIs, services detected]
Note: Business logic analysis is not needed for PROJECT_RULES.md. Business rules belong in PRD/product docs, not technical project rules.
After agent completes, confirm:
- `ring:codebase-explorer` returned "## Technical Analysis (Legacy Project)" section
- Output contains non-empty content for: Tech Stack, External Integrations, Configuration
If agent failed or returned empty output → Re-dispatch. Cannot proceed without technical analysis.
After agents complete, ask only what they couldn't determine from code:
┌─────────────────────────────────────────────────────────────────┐
│ ✓ Codebase Analysis Complete │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I've analyzed your codebase. Now I need a few details that │
│ only you can provide (not visible in the code). │
│ │
└─────────────────────────────────────────────────────────────────┘
Use AskUserQuestion for each:
| # | Question | Why Agents Can't Determine This |
|---|---|---|
| 1 | What do you need help with? (Current task/feature/fix) | Future intent, not in code |
| 2 | Any external APIs or services not visible in code? (Third-party integrations planned) | Planned integrations, not yet in code |
| 3 | Any specific technology not in Ring Standards? (Message broker, cache, etc.) | Project-specific tech not in Ring |
Note: Business rules belong in PRD/product docs, not in PROJECT_RULES.md.
Create tool:
file_path: "docs/PROJECT_RULES.md"
content: |
# Project Rules
> Ring Standards apply automatically. This file documents only what Ring does not cover.
> For error handling, logging, testing, architecture, lib-commons → See Ring Standards (auto-loaded by agents)
> Generated from legacy project analysis.
## What Ring Standards Already Cover (DO NOT ADD HERE)

The following are defined in Ring Standards and MUST NOT be duplicated:
- Error handling patterns (no panic, wrap errors)
- Logging standards (structured JSON, zerolog/zap)
- Testing patterns (table-driven tests, mocks)
- Architecture patterns (Hexagonal, Clean Architecture)
- Observability (OpenTelemetry, trace correlation)
- lib-commons usage and patterns
- API directory structure
---
## Tech Stack (Not in Ring Standards)
[From ring:codebase-explorer: Technologies not covered by Ring Standards]
[e.g., specific message broker, specific cache, DB if not PostgreSQL]
| Technology | Purpose | Notes |
|------------|---------|-------|
| [detected] | [purpose] | [notes] |
## Non-Standard Directory Structure
[From ring:codebase-explorer: Directories that deviate from Ring's standard API structure]
[e.g., workers/, consumers/, polling/]
| Directory | Purpose | Pattern |
|-----------|---------|---------|
| [detected] | [purpose] | [pattern] |
## External Integrations
[From ring:codebase-explorer: Third-party services specific to this project]
| Service | Purpose | Docs |
|---------|---------|------|
| [detected] | [purpose] | [link] |
## Environment Configuration
[From ring:codebase-explorer: Project-specific env vars not covered by Ring]
| Variable | Purpose | Example |
|----------|---------|---------|
| [detected] | [purpose] | [example] |
## Domain Terminology
[From codebase analysis: Technical names used in this codebase]
| Term | Definition | Used In |
|------|------------|---------|
| [detected] | [definition] | [location] |
---
*Generated: [ISO timestamp]*
*Source: Legacy project analysis (ring:codebase-explorer)*
*Ring Standards Version: [version from WebFetch]*

┌─────────────────────────────────────────────────────────────────┐
│ ✓ PROJECT_RULES.md Generated for Legacy Project │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I analyzed your codebase using: │
│ • ring:codebase-explorer (technical patterns, stack, structure) │
│ │
│ Combined with your input on: │
│ • Current development goal │
│ • External integrations │
│ • Project-specific technology │
│ │
│ Generated: docs/PROJECT_RULES.md │
│ │
│ Note: Ring Standards (error handling, logging, testing, etc.) │
│ are not duplicated - agents load them automatically via WebFetch│
│ │
│ Please review the file and make any corrections needed. │
│ │
└─────────────────────────────────────────────────────────────────┘
Use AskUserQuestion:
- Question: "PROJECT_RULES.md has been generated. Would you like to review it before proceeding?"
- Options: (a) Proceed (b) Open for editing first
Proceed to Step 1
For NEW projects (not legacy), ask about PM documents:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 NEW PROJECT - PM DOCUMENTS CHECK │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Since this is a new project following Ring workflow, you │
│ should have PM documents from the pre-dev workflow. │
│ │
│ Do you have any of these PM documents? │
│ • PRD (Product Requirements Document) │
│ • TRD (Technical Requirements Document) │
│ • Feature Map (from ring:pre-dev-feature-map skill) │
│ │
└─────────────────────────────────────────────────────────────────┘
"Do you have PRD, TRD, or Feature Map documents for this project?"
(a) Yes, I have PM documents (b) No, I don't have these documents
"Please provide the file path(s) to your PM documents:
- PRD path (or 'skip' if none):
- TRD path (or 'skip' if none):
- Feature Map path (or 'skip' if none): "
Typical PM team output structure:
docs/pre-dev/{feature-name}/
├── prd.md → PRD path: docs/pre-dev/auth-system/prd.md
├── trd.md → TRD path: docs/pre-dev/auth-system/trd.md
├── feature-map.md → Feature Map path: docs/pre-dev/auth-system/feature-map.md
├── api-design.md
├── data-model.md
└── tasks.md
- `/ring:pre-dev-full` output: `docs/pre-dev/{feature}/prd.md`, `trd.md`, `feature-map.md`
- `/ring:pre-dev-feature` output: `docs/pre-dev/{feature}/prd.md`, `feature-map.md`
- Custom locations: User may have docs in different paths (e.g., `requirements/`, `specs/`)
Go to Step 0.3.1 (Generate from PM Documents)
HARD BLOCK (Step 0.3.2)
# Read PRD if provided
Read tool:
file_path: "[user-provided PRD path]"
# Read TRD if provided
Read tool:
file_path: "[user-provided TRD path]"
# Read Feature Map if provided
Read tool:
file_path: "[user-provided Feature Map path]"

⛔ DEDUPLICATION RULE: Extract only what Ring Standards DO NOT cover.
| From PRD | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Domain terms, entities | Domain Terminology | Technical names only |
| External service mentions | External Integrations | Third-party APIs |
| ❌ Stays in PRD, not PROJECT_RULES | ||
| ❌ Ring Standards covers this |
| From TRD | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Tech stack not in Ring | Tech Stack (Not in Ring) | Only non-standard tech |
| External APIs | External Integrations | Third-party services |
| Non-standard directories | Non-Standard Directory Structure | Workers, consumers, etc. |
| ❌ Ring Standards covers this | ||
| ❌ Ring Standards covers this |
| From Feature Map | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Technology choices not in Ring | Tech Stack (Not in Ring) | Only if not in Ring |
| External dependencies | External Integrations | Third-party services |
| ❌ Ring Standards covers this |
Create tool:
file_path: "docs/PROJECT_RULES.md"
content: |
# Project Rules
> ⛔ IMPORTANT: Ring Standards are not automatic. Agents MUST WebFetch them before implementation.
> This file documents only project-specific information not covered by Ring Standards.
> Generated from PM documents (PRD/TRD/Feature Map).
>
> Ring Standards URLs:
> - Go: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/golang.md
> - TypeScript: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/typescript.md
## What Ring Standards Cover (DO NOT DUPLICATE HERE)

The following are defined in Ring Standards and MUST NOT be duplicated in this file:
- Error handling patterns (no panic, wrap errors)
- Logging standards (structured JSON via lib-commons)
- Testing patterns (table-driven tests, mocks)
- Architecture patterns (Hexagonal, Clean Architecture)
- Observability (OpenTelemetry via lib-commons)
- lib-commons / lib-common-js usage and patterns
- API directory structure (Lerian pattern)
- Database connections (PostgreSQL, MongoDB, Redis via lib-commons)
- Bootstrap pattern (config.go, service.go, server.go)
**Agents MUST WebFetch Ring Standards and output Standards Coverage Table.**
---
## Tech Stack (Not in Ring Standards)
[From TRD/Feature Map: only technologies not covered by Ring Standards]
| Technology | Purpose | Notes |
|------------|---------|-------|
| [detected] | [purpose] | [notes] |
## Non-Standard Directory Structure
[From TRD: Directories that deviate from Ring's standard API structure]
| Directory | Purpose | Pattern |
|-----------|---------|---------|
| [detected] | [purpose] | [pattern] |
## External Integrations
[From TRD/PRD: Third-party services specific to this project]
| Service | Purpose | Docs |
|---------|---------|------|
| [detected] | [purpose] | [link] |
## Environment Configuration
[From TRD: Project-specific env vars not covered by Ring]
| Variable | Purpose | Example |
|----------|---------|---------|
| [detected] | [purpose] | [example] |
## Domain Terminology
[From PRD: Technical names used in this codebase]
| Term | Definition | Used In |
|------|------------|---------|
| [detected] | [definition] | [location] |
---
*Generated from: [PRD path], [TRD path], [Feature Map path]*
*Ring Standards Version: [version from WebFetch]*
*Generated: [ISO timestamp]*

If any section is empty or incomplete, ask supplementary questions:
| Missing Section | Supplementary Question |
|---|---|
| Tech Stack (Not in Ring) | "Any technology not covered by Ring Standards (message broker, cache, etc.)?" |
| External Integrations | "Any third-party APIs or external services?" |
| Domain Terminology | "What are the main entities/classes in this codebase?" |
| Non-Standard Directories | "Any directories that don't follow standard API structure (workers, consumers)?" |
Note: Do not ask about architecture, error handling, logging, testing - Ring Standards covers these.
Present to user for review, then proceed to Step 1.
┌─────────────────────────────────────────────────────────────────┐
│ ⛔ CANNOT PROCEED - PM DOCUMENTS REQUIRED │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Development cannot start without PM documents. │
│ │
│ You MUST create PRD, TRD, and/or Feature Map documents first │
│ using PM team skills: │
│ │
│ /ring:pre-dev-full → For features ≥2 days (9 gates) │
│ /ring:pre-dev-feature → For features <2 days (4 gates) │
│ │
│ These commands will guide you through creating: │
│ • PRD (Product Requirements Document) │
│ • TRD (Technical Requirements Document) │
│ • Feature Map (technology choices, feature relationships) │
│ │
│ After completing pre-dev workflow, run /ring:dev-cycle again. │
│ │
└─────────────────────────────────────────────────────────────────┘
STOP EXECUTION. Do not proceed to Step 1.
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Skip PM docs, I'll add them later" | Later = never. No PM docs = no project context = agents guessing. | Run /ring:pre-dev-full or /ring:pre-dev-feature NOW |
| "Project is simple, doesn't need PM docs" | Simple projects still need domain context defined upfront. | Create PM documents first |
| "I know what I want to build" | Your knowledge ≠ documented knowledge agents can use. | Document in PRD/TRD/Feature Map |
| "PM workflow takes too long" | PM workflow takes 30-60 min. Rework from unclear requirements takes days. | Invest time upfront |
| "Just let me start coding" | Coding without requirements = building the wrong thing. | Requirements first, code second |
| "It's legacy but I don't want to answer questions" | Legacy analysis takes ~5 min. Without it, agents have zero context. | Answer the 3 questions |
| "Legacy project is too complex to explain" | Start with high-level answers. PROJECT_RULES.md can be refined later. | Provide what you know NOW |
| User Says | Your Response |
|---|---|
| "Just skip this, I'll create PM docs later" | "PM documents are REQUIRED for new projects. Without them, agents cannot understand your project's domain context or technical requirements. Run /ring:pre-dev-full or /ring:pre-dev-feature first." |
| "I don't need formal documents" | "PM documents are the source of truth for PROJECT_RULES.md. Development cannot start without documented requirements." |
| "This is just a quick prototype" | "Even prototypes need clear requirements. /ring:pre-dev-feature takes ~30 minutes and prevents hours of rework." |
| "I already explained what I want verbally" | "Verbal explanations cannot be used by agents. Requirements MUST be documented in PRD/TRD/Feature Map files." |
| "It's a legacy project but skip the questions" | "The legacy analysis (ring:codebase-explorer + 3 questions) is the only way I can understand your project. It takes ~5 minutes and enables me to help you effectively." |
| "I'll fill in PROJECT_RULES.md myself" | "That works! Create docs/PROJECT_RULES.md with: Tech Stack (not in Ring), External Integrations, Domain Terminology. Do not duplicate Ring Standards content. Then run /ring:dev-cycle again." |
Input: Custom instructions string without a task file path
Example: /ring:dev-cycle "Add webhook notification support for account status changes"
When custom instructions are provided without a tasks file, ring:dev-cycle generates tasks internally:
- Detect instructions-only mode: No task file argument AND instructions string provided
- Analyze prompt: Extract intent, scope, and requirements from the prompt
- Explore codebase: Dispatch `ring:codebase-explorer` to understand project structure
- Generate tasks: Create task structure internally based on prompt + codebase analysis
Task tool:
subagent_type: "ring:codebase-explorer"
prompt: |
Analyze this codebase to support the following implementation request:
**User Request:** {prompt}
Provide:
1. Relevant files and patterns for this request
2. Suggested task breakdown (T-001, T-002, etc.)
3. Acceptance criteria for each task
4. Files that will need modification
Output as structured task list compatible with ring:dev-cycle.

- Present generated tasks: Show user the auto-generated task breakdown
- Confirm with user: "I generated X tasks from your prompt. Proceed?"
- Set state:
  - `state_path = "docs/ring:dev-cycle/current-cycle.json"`
  - `cycle_type = "prompt"`
  - `source_prompt = "[user's prompt]"`
- Generate `tasks` array from ring:codebase-explorer output
- Continue to execution mode selection (Step 1 substeps 7-9)
Anti-Rationalization for Prompt-Only Mode:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Skip codebase exploration, I understand the prompt" | Prompt understanding ≠ codebase understanding. Explorer provides context. | Always run ring:codebase-explorer |
| "Generate minimal tasks to go faster" | Minimal tasks = missed requirements. Comprehensive breakdown prevents rework. | Generate complete task breakdown |
| "User knows what they want, skip confirmation" | User intent ≠ generated tasks. Confirmation prevents wrong implementation. | Always confirm generated tasks |
Input: path/to/tasks.md or path/to/pre-dev/{feature}/ with optional second argument for custom instructions
Examples:
- `/ring:dev-cycle tasks.md`
- `/ring:dev-cycle tasks.md "Focus on error handling"`
- Detect input: File → Load directly | Directory → Load tasks.md + discover subtasks/
- Build order: Read tasks, check for subtasks (ST-XXX-01, 02...) or TDD autonomous mode
- Determine state path:
- If source_file contains `docs/ring:dev-refactor/` → `state_path = "docs/ring:dev-refactor/current-cycle.json"`, `cycle_type = "refactor"`
- Else → `state_path = "docs/ring:dev-cycle/current-cycle.json"`, `cycle_type = "feature"`
- Capture and validate custom instructions: If second argument provided:
- Sanitize input: Trim whitespace, strip control characters (except newlines)
- Store validated value: Set `custom_prompt` field (empty string if not provided)
- Initialize state: Generate cycle_id, create state file at
{state_path}, set indices to 0 - Display plan: "Loaded X tasks with Y subtasks"
- ASK EXECUTION MODE (MANDATORY - AskUserQuestion):
- Options: (a) Manual per subtask (b) Manual per task (c) Automatic
- Do not skip: User hints ≠ mode selection. Only explicit a/b/c is valid.
- ASK COMMIT TIMING (MANDATORY - AskUserQuestion):
- Options: (a) Per subtask (b) Per task (c) At the end
- Store in `commit_timing` field in state
- Start: Display mode + commit timing, proceed to Gate 0
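The custom-instruction sanitization in the capture step above can be sketched as follows (the function name and exact control-character policy are assumptions):

```python
def sanitize_instructions(raw: str) -> str:
    """Trim surrounding whitespace and strip control characters,
    keeping newlines, as described in the capture step."""
    cleaned = "".join(
        ch for ch in raw
        if ch == "\n" or ch >= " "  # drop C0 control chars except newline
    )
    return cleaned.strip()

print(sanitize_instructions("  Focus on error handling\x07\n\tplease  "))
```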
- Find existing state file:
  - Check `docs/ring:dev-cycle/current-cycle.json` first
  - If not found, check `docs/ring:dev-refactor/current-cycle.json`
  - If neither exists → Error: "No cycle to resume"
- Load found state file, validate (state_path is stored in the state object)
- Display: cycle started, tasks completed/total, current task/subtask/gate, paused reason
- Handle paused states:
| Status | Action |
|---|---|
| `paused_for_approval` | Re-present Step 11.1 checkpoint |
| `paused_for_testing` | Ask if testing complete → continue or keep paused |
| `paused_for_task_approval` | Re-present Step 11.2 checkpoint |
| `paused_for_integration_testing` | Ask if integration testing complete |
| `paused` (generic) | Ask user to confirm resume |
| `in_progress` | Resume from current gate |
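The dispatch over paused states can be sketched as a simple lookup (the action strings are shorthand for the behaviors in the table above):

```python
# Map each persisted cycle status to its resume behavior.
RESUME_ACTIONS = {
    "paused_for_approval": "re-present Step 11.1 checkpoint",
    "paused_for_testing": "ask if testing complete",
    "paused_for_task_approval": "re-present Step 11.2 checkpoint",
    "paused_for_integration_testing": "ask if integration testing complete",
    "paused": "ask user to confirm resume",
    "in_progress": "resume from current gate",
}

def resume_action(status: str) -> str:
    try:
        return RESUME_ACTIONS[status]
    except KeyError:
        # An unrecognized status means a corrupt or hand-edited state file.
        raise ValueError(f"unknown cycle status: {status}")

print(resume_action("paused_for_testing"))
```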
Task files are generated by /pre-dev-* or /ring:dev-refactor, which handle content validation. The ring:dev-cycle performs basic format checks:
| Check | Validation | Action |
|---|---|---|
| File exists | Task file path is readable | Error: abort |
| Task headers | At least one `## Task:` found | Error: abort |
| Task ID format | `## Task: {ID} - {Title}` | Warning: use line number as ID |
| Acceptance criteria | At least one `- [ ]` per task | Warning: task may fail validation gate |
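The format checks above might be sketched as follows (the regexes are an approximation of the documented patterns, and the function name is illustrative):

```python
import re

def check_task_file(text: str) -> list[str]:
    """Return error/warning strings for the basic tasks.md format checks."""
    problems = []
    headers = re.findall(r"^## Task: (.+)$", text, flags=re.MULTILINE)
    if not headers:
        problems.append("ERROR: no '## Task:' headers found")
    for h in headers:
        # Expected shape: "{ID} - {Title}", e.g. "T-001 - Add webhooks"
        if not re.match(r"\S+ - .+", h):
            problems.append(f"WARNING: malformed task header: {h}")
    if "- [ ]" not in text:
        problems.append("WARNING: no acceptance criteria ('- [ ]') found")
    return problems

sample = "## Task: T-001 - Add webhooks\n- [ ] sends event on status change\n"
print(check_task_file(sample))
```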
MANDATORY: Scan the codebase once at cycle start to detect external dependencies. Store in state.detected_dependencies for use by Gates 2, 6, and 7.
detected_dependencies = []
1. Scan docker-compose.yml / docker-compose.yaml for service images:
- Grep tool: pattern "postgres" in docker-compose* files → add "postgres"
- Grep tool: pattern "mongo" in docker-compose* files → add "mongodb"
- Grep tool: pattern "valkey" in docker-compose* files → add "valkey"
- Grep tool: pattern "redis" in docker-compose* files → add "redis"
- Grep tool: pattern "rabbitmq" in docker-compose* files → add "rabbitmq"
2. Scan dependency manifests:
if language == "go":
- Grep tool: pattern "github.com/lib/pq" in go.mod → add "postgres"
- Grep tool: pattern "github.com/jackc/pgx" in go.mod → add "postgres"
- Grep tool: pattern "go.mongodb.org/mongo-driver" in go.mod → add "mongodb"
- Grep tool: pattern "github.com/redis/go-redis" in go.mod → add "redis"
- Grep tool: pattern "github.com/valkey-io/valkey-go" in go.mod → add "valkey"
- Grep tool: pattern "github.com/rabbitmq/amqp091-go" in go.mod → add "rabbitmq"
if language == "typescript":
- Grep tool: pattern "\"pg\"" in package.json → add "postgres"
- Grep tool: pattern "@prisma/client" in package.json → add "postgres"
- Grep tool: pattern "\"mongodb\"" in package.json → add "mongodb"
- Grep tool: pattern "\"mongoose\"" in package.json → add "mongodb"
- Grep tool: pattern "\"redis\"" in package.json → add "redis"
- Grep tool: pattern "\"ioredis\"" in package.json → add "redis"
- Grep tool: pattern "@valkey" in package.json → add "valkey"
- Grep tool: pattern "\"amqplib\"" in package.json → add "rabbitmq"
- Grep tool: pattern "amqp-connection-manager" in package.json → add "rabbitmq"
3. Deduplicate detected_dependencies
4. Store: state.detected_dependencies = detected_dependencies
5. Log: "Auto-detected external dependencies: [detected_dependencies]"
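The scan above reduces to substring checks over a few manifest files; a Python sketch for the Go case (marker lists abbreviated, names illustrative):

```python
# Substring markers per dependency, condensed from the Grep patterns above.
GO_MOD_MARKERS = {
    "postgres": ["github.com/lib/pq", "github.com/jackc/pgx"],
    "mongodb": ["go.mongodb.org/mongo-driver"],
    "redis": ["github.com/redis/go-redis"],
    "valkey": ["github.com/valkey-io/valkey-go"],
    "rabbitmq": ["github.com/rabbitmq/amqp091-go"],
}

def detect_dependencies(go_mod: str, compose: str) -> list[str]:
    found = []
    # Pass 1: dependency manifest (go.mod).
    for dep, markers in GO_MOD_MARKERS.items():
        if any(m in go_mod for m in markers):
            found.append(dep)
    # Pass 2: docker-compose service images.
    for service in ("postgres", "mongo", "valkey", "redis", "rabbitmq"):
        name = "mongodb" if service == "mongo" else service
        if service in compose and name not in found:
            found.append(name)
    return found  # membership checks keep the list deduplicated

deps = detect_dependencies(
    go_mod="require github.com/jackc/pgx/v5 v5.5.0",
    compose="services:\n  cache:\n    image: redis:7",
)
print(deps)
```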
### ⛔ MANDATORY: Multi-Tenant Detection & Compliance Audit
<cannot_skip>
MANDATORY: Multi-tenant dual-mode applies to all Go services (no exceptions). Gate 0 implements dual-mode from the start using lib-commons v3 resolvers. Gate 0.5G verifies compliance. This detection captures the CURRENT state of the codebase for context.
See [multi-tenant.md](../../docs/standards/golang/multi-tenant.md) for the canonical model and [dev-delivery-verification](../dev-delivery-verification/SKILL.md) Step 3.5G for the verification checks.
</cannot_skip>
```text
6. Detect existing multi-tenant code and audit compliance:
multi_tenant_exists = false
multi_tenant_compliant = false
if language == "go":
// Phase 1: Detection
- Grep tool: pattern "MULTI_TENANT_ENABLED" in internal/ --include="*.go" → multi_tenant_exists = true
- Grep tool: pattern "tenant-manager" in go.mod → multi_tenant_exists = true
- Grep tool: pattern "TenantMiddleware\|MultiPoolMiddleware" in internal/ --include="*.go" → multi_tenant_exists = true
// Phase 2: Compliance audit (only if multi_tenant_exists = true)
if multi_tenant_exists:
compliance_checks = {}
- Grep: "TENANT_MANAGER_ADDRESS\|TENANT_URL\|TENANT_MANAGER_URL" in internal/ → if match: compliance_checks.config = "NON-COMPLIANT"
- Grep: "tmmiddleware.NewTenantMiddleware\|tmmiddleware.NewMultiPoolMiddleware" in internal/ → if no match: compliance_checks.middleware = "NON-COMPLIANT"
- Grep: "core.ResolvePostgres\|core.ResolveMongo\|core.ResolveModuleDB" in internal/ → if no match: compliance_checks.repositories = "NON-COMPLIANT"
- Grep: "WithCircuitBreaker" in internal/ → if no match: compliance_checks.circuit_breaker = "NON-COMPLIANT"
- Grep: "TestMultiTenant_BackwardCompatibility" in internal/ → if no match: compliance_checks.backward_compat = "NON-COMPLIANT"
if all checks pass: multi_tenant_compliant = true
else: multi_tenant_compliant = false
if language == "typescript":
- Grep tool: pattern "MULTI_TENANT_ENABLED" in src/ --include="*.ts" → multi_tenant_exists = true
- Grep tool: pattern "tenant-manager" in package.json → multi_tenant_exists = true
// TypeScript compliance audit follows same pattern
state.multi_tenant_exists = multi_tenant_exists
state.multi_tenant_compliant = multi_tenant_compliant
if multi_tenant_exists && multi_tenant_compliant:
Log: "Multi-tenant COMPLIANT — Gate 0 agent uses dual-mode resolvers, Gate 0.5G will verify"
elif multi_tenant_exists && !multi_tenant_compliant:
Log: "Multi-tenant DETECTED but NON-COMPLIANT — Gate 0 agent must fix, Gate 0.5G will verify"
else:
Log: "Multi-tenant NOT detected — Gate 0 agent will implement dual-mode, Gate 0.5G will verify"
```
MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]
<auto_detect_reason> PM team task files often omit external_dependencies. If the codebase uses postgres, mongodb, valkey, or rabbitmq, these MUST be detected and passed to Gates 6 (integration) and 7 (chaos). Auto-detection at cycle level avoids redundant scans per gate.
Multi-tenant state is detected here and passed to Gate 0 (implementation) and Gate 0.5G (verification). See multi-tenant.md for the canonical model and compliance criteria. </auto_detect_reason>
REQUIRED SUB-SKILL: Use ring:dev-implementation
Execution Unit: Task (if no subtasks) or Subtask (if task has subtasks)
MUST execute the Before Gate 0 (task start) row from the State Persistence Checkpoints table before sub-steps 2.1–2.3:
- Set `task.status = "in_progress"` in state JSON
- Update tasks.md Status → `🔄 Doing` (per tasks.md Status update rules in that table)
- Write state to file
CANNOT proceed to sub-steps 2.1–2.3 without completing this checkpoint.
See shared-patterns/shared-orchestrator-principle.md for full details.
⛔ FORBIDDEN: Executing TDD-RED/GREEN logic directly from this step. MUST invoke the ring:dev-implementation skill via the Skill tool; it handles all TDD phases, agent selection, agent dispatch, standards verification, and fix iteration.
See shared-patterns/file-size-enforcement.md for thresholds, verification commands, split strategies, and agent instructions.
Summary: No source file may exceed 300 lines (>300 = loop back to agent; >500 = hard block). Implementation agents MUST split proactively. Enforcement points:
- Gate 0: Implementation agent receives file-size instructions; orchestrator runs verification command after agent completes and loops back if any file > 300 lines.
- Gate 0.5: Delivery verification skill runs 7 checks: (A) file-size, (B) license headers, (C) linting, (D) migration safety, (E) vulnerability scanning, (F) API backward compatibility, (G) multi-tenant dual-mode. Any FAIL → return to Gate 0 with specific fix instructions.
- Gate 8: Code reviewers MUST flag any file > 300 lines as a MEDIUM+ issue (blocking).
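The threshold logic above can be sketched as a small checker. This is illustrative only; the real verification commands live in shared-patterns/file-size-enforcement.md, and the function names here are assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// fileSizeVerdict applies the enforcement thresholds:
// <=300 lines OK, 301-500 loop back to the agent, >500 hard block.
func fileSizeVerdict(lines int) string {
	switch {
	case lines > 500:
		return "HARD_BLOCK"
	case lines > 300:
		return "LOOP_BACK"
	default:
		return "OK"
	}
}

// countLines counts newline-delimited lines in a source file.
func countLines(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	n := 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		n++
	}
	return n, sc.Err()
}

func main() {
	for _, path := range os.Args[1:] {
		n, err := countLines(path)
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		fmt.Println(path, n, fileSizeVerdict(n))
	}
}
```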
Gather from current execution unit:
implementation_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
requirements: state.current_unit.acceptance_criteria,
// REQUIRED - detected from project
language: state.current_unit.language, // "go" | "typescript" | "python"
service_type: state.current_unit.service_type, // "api" | "worker" | "batch" | "cli" | "frontend" | "bff"
// OPTIONAL - additional context
technical_design: state.current_unit.technical_design || null,
existing_patterns: state.current_unit.existing_patterns || [],
project_rules_path: "docs/PROJECT_RULES.md"
}
1. Record gate start timestamp
2. REQUIRED: Invoke ring:dev-implementation skill with structured input:
Skill("ring:dev-implementation") with input:
unit_id: implementation_input.unit_id
requirements: implementation_input.requirements
language: implementation_input.language
service_type: implementation_input.service_type
technical_design: implementation_input.technical_design
existing_patterns: implementation_input.existing_patterns
project_rules_path: implementation_input.project_rules_path
The skill handles:
- Selecting appropriate agent (Go/TS/Frontend based on language)
- TDD-RED phase (writing failing test, capturing failure output)
- TDD-GREEN phase (implementing code to pass test)
- Standards compliance verification (iteration loop, max 3 attempts)
- Re-dispatching agent for compliance fixes
- Outputting Standards Coverage Table with evidence
3. REQUIRED: Parse skill output for results:
Expected output sections:
- "## Implementation Summary" → status (PASS/FAIL), agent used
- "## TDD Results" → RED/GREEN phase status
- "## Files Changed" → created/modified files list
- "## Handoff to Next Gate" → ready_for_gate_1: YES/NO
if skill output contains "Status: PASS" and "Ready for Gate 1: YES":
→ Gate 0 PASSED. Proceed to Step 2.3.
if skill output contains "Status: FAIL" or "Ready for Gate 1: NO":
→ Gate 0 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran TDD and standards verification
→ If "ESCALATION" in output: STOP and report to user
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. When ring:dev-implementation skill returns PASS:
REQUIRED: Parse from skill output:
- agent_used: extract from "## Implementation Summary"
- tdd_red_status: extract from "## TDD Results" table
- tdd_green_status: extract from "## TDD Results" table
- files_changed: extract from "## Files Changed" table
- standards_compliance: extract from Standards Coverage Table
- agent_outputs.implementation = {
skill: "ring:dev-implementation",
agent: "[agent used by skill]",
output: "[full skill output]",
timestamp: "[ISO timestamp]",
duration_ms: [execution time],
tdd_red: {
status: "completed",
test_file: "[from skill output]",
failure_output: "[from skill output]"
},
tdd_green: {
status: "completed",
implementation_files: "[from skill output]",
pass_output: "[from skill output]"
},
standards_compliance: {
total_sections: [N from skill output],
compliant: [N sections with ✅],
not_applicable: [N sections with N/A],
non_compliant: 0
}
}
6. Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ GATE 0 COMPLETE │
├─────────────────────────────────────────────────┤
│ Skill: ring:dev-implementation │
│ Agent: [agent_used] │
│ TDD-RED: FAIL captured ✓ │
│ TDD-GREEN: PASS verified ✓ │
│ STANDARDS: [N]/[N] sections compliant ✓ │
│ │
│ Proceeding to Gate 0.5 (Delivery Verification)... │
└─────────────────────────────────────────────────┘
7. MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]
See "State Persistence Rule" section.
8. Proceed to Gate 0.5
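The status-marker checks in step 3 reduce to simple substring tests on the skill's output. A minimal sketch, with marker strings taken from the expected output sections above (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// gateDecision inspects a skill's markdown output for the status markers
// described above and returns the orchestrator's next action.
func gateDecision(output string) string {
	if strings.Contains(output, "ESCALATION") {
		return "ESCALATE" // STOP and report to user
	}
	if strings.Contains(output, "Status: PASS") &&
		strings.Contains(output, "Ready for Gate 1: YES") {
		return "PASS" // proceed to Step 2.3
	}
	// Skill already dispatched fixes and re-ran verification.
	return "BLOCKED"
}

func main() {
	fmt.Println(gateDecision("## Implementation Summary\nStatus: PASS\nReady for Gate 1: YES"))
}
```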
1. Load Skill:
Skill("ring:dev-delivery-verification")
2. Invoke with Gate 0 outputs:
- unit_id: current task/subtask ID
- requirements: original task requirements from tasks.md
- files_changed: from Gate 0 agent_outputs.implementation (## Files Changed)
- gate0_handoff: full Gate 0 output
3. Parse result from skill output:
- result: PASS | PARTIAL | FAIL
- requirements_total: from "## Requirement Coverage Matrix"
- requirements_delivered: count of ✅ DELIVERED rows
- requirements_missing: count of ❌ NOT DELIVERED rows
- dead_code_items: count from "## Dead Code Detection"
- remediation_items: count from "## Return to Gate 0" (0 if PASS)
4. Update state:
- gate_progress.delivery_verification = {
status: "completed",
requirements_total: [N],
requirements_delivered: [N],
requirements_missing: [N],
dead_code_items: [N],
remediation_items: [N],
completed_at: "[ISO timestamp]"
}
5. Control flow based on result:
IF PASS:
→ Display ✓ GATE 0.5 COMPLETE, proceed to Gate 1
IF PARTIAL:
→ Extract undelivered requirements from "## Return to Gate 0"
→ Display ⚠ GATE 0.5 PARTIAL — [N] of [M] requirements not delivered
→ Return to Gate 0 (Step 2) with explicit fix instructions:
"Deliver the following requirements: [list from Return to Gate 0]"
→ After Gate 0 re-run, re-execute Gate 0.5
→ Max 2 retries. If still PARTIAL after 2 retries → escalate to user
IF FAIL:
→ Extract all gaps from "## Return to Gate 0"
→ Display ✗ GATE 0.5 FAIL — critical requirements not delivered
→ Return to Gate 0 (Step 2) with full remediation list
→ After Gate 0 re-run, re-execute Gate 0.5
→ Max 2 retries. If still FAIL after 2 retries → escalate to user
6. Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ GATE 0.5 COMPLETE │
├─────────────────────────────────────────────────┤
│ Skill: ring:dev-delivery-verification │
│ Requirements: [delivered]/[total] DELIVERED │
│ Dead Code: [N] items │
│ Verdict: [PASS|PARTIAL|FAIL] │
│ │
│ Proceeding to Gate 1 (DevOps)... │
└─────────────────────────────────────────────────┘
7. MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]
See "State Persistence Rule" section.
8. Proceed to Gate 1
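The bounded retry policy in step 5 (initial run, at most 2 retries, then escalate) can be sketched as follows. The `verify` callback stands in for one round of "return to Gate 0 with fix instructions, re-run Gate 0, re-execute Gate 0.5"; names are illustrative:

```go
package main

import "fmt"

const maxRetries = 2 // Gate 0.5 allows the initial run plus at most 2 retries

// verifyWithRetries re-runs the Gate 0 + Gate 0.5 round until PASS or
// the retry budget is exhausted; verify returns "PASS", "PARTIAL", or "FAIL".
func verifyWithRetries(verify func() string) string {
	result := verify()
	for retry := 0; retry < maxRetries && result != "PASS"; retry++ {
		result = verify() // return to Gate 0, then re-execute Gate 0.5
	}
	if result != "PASS" {
		return "ESCALATE" // still PARTIAL/FAIL after 2 retries → user decides
	}
	return "PROCEED" // Gate 0.5 complete, continue to Gate 1
}

func main() {
	attempts := 0
	fmt.Println(verifyWithRetries(func() string {
		attempts++
		if attempts < 2 {
			return "PARTIAL" // first run misses a requirement
		}
		return "PASS" // fixed on the first retry
	}))
}
```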
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I can run TDD-RED/GREEN directly from here" | Inline TDD = skipping the skill. Skill has iteration logic and validation. | Invoke Skill("ring:dev-implementation") |
| "I already know which agent to dispatch" | Agent selection is the SKILL's job, not the orchestrator's. | Invoke Skill("ring:dev-implementation") |
| "The TDD steps are documented here, I'll follow them" | These steps are REFERENCE, not EXECUTABLE. The skill is executable. | Invoke Skill("ring:dev-implementation") |
| "Skill adds overhead for simple tasks" | Overhead = compliance checks. Simple ≠ exempt. | Invoke Skill("ring:dev-implementation") |
| "I'll dispatch the agent and verify output myself" | Self-verification skips the skill's re-dispatch loop. | Invoke Skill("ring:dev-implementation") |
| "Agent already did TDD internally" | Internal ≠ verified by skill. Skill validates output structure. | Invoke Skill("ring:dev-implementation") |
REQUIRED SUB-SKILL: Use ring:dev-devops
Gate 1 is a BLOCKING gate. DevOps agent MUST create all required artifacts. If any artifact is missing:
- You CANNOT proceed to Gate 2
- You MUST re-dispatch to ring:devops-engineer to create missing artifacts
- You MUST verify all artifacts exist before proceeding
See shared-patterns/standards-coverage-table.md → "ring:devops-engineer → devops.md" for all required sections.
Key artifacts from devops.md:
- Containers (Dockerfile + Docker Compose)
- Makefile Standards (all required commands)
- Infrastructure as Code (if applicable)
- Helm charts (if K8s deployment)
Gather from previous gates:
devops_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
// REQUIRED - from Gate 0 context
language: state.current_unit.language, // "go" | "typescript" | "python"
service_type: state.current_unit.service_type, // "api" | "worker" | "batch" | "cli"
implementation_files: agent_outputs.implementation.files_changed, // list of files from Gate 0
// OPTIONAL - additional context
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
new_dependencies: state.current_unit.new_deps || [], // new deps added in Gate 0
new_env_vars: state.current_unit.env_vars || [], // env vars needed
new_services: state.current_unit.services || [], // postgres, redis, etc.
existing_dockerfile: [check if Dockerfile exists],
existing_compose: [check if docker-compose.yml exists]
}
1. Record gate start timestamp
2. Invoke ring:dev-devops skill with structured input:
Skill("ring:dev-devops") with input:
unit_id: devops_input.unit_id
language: devops_input.language
service_type: devops_input.service_type
implementation_files: devops_input.implementation_files
gate0_handoff: devops_input.gate0_handoff
new_dependencies: devops_input.new_dependencies
new_env_vars: devops_input.new_env_vars
new_services: devops_input.new_services
existing_dockerfile: devops_input.existing_dockerfile
existing_compose: devops_input.existing_compose
The skill handles:
- Dispatching ring:devops-engineer agent
- Dockerfile creation/update
- docker-compose.yml configuration
- .env.example documentation
- Verification commands execution
- Fix iteration loop (max 3 attempts)
3. Parse skill output for results:
Expected output sections:
- "## DevOps Summary" → status, iterations
- "## Files Changed" → Dockerfile, docker-compose, .env.example actions
- "## Verification Results" → build, startup, health checks
- "## Handoff to Next Gate" → ready_for_sre: YES/NO
if skill output contains "Status: PASS" and "Ready for Gate 2: YES":
→ Gate 1 PASSED. Proceed to Step 3.3.
if skill output contains "Status: FAIL" or "Ready for Gate 2: NO":
→ Gate 1 BLOCKED.
→ Skill already dispatched fixes to ring:devops-engineer
→ Skill already re-ran verification
→ If "ESCALATION" in output: STOP and report to user
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. When ring:dev-devops skill returns PASS:
Parse from skill output:
- status: extract from "## DevOps Summary"
- dockerfile_action: extract from "## Files Changed" table
- compose_action: extract from "## Files Changed" table
- verification_passed: extract from "## Verification Results"
- agent_outputs.devops = {
skill: "ring:dev-devops",
output: "[full skill output]",
artifacts_created: ["Dockerfile", "docker-compose.yml", "Makefile", ".env.example"],
verification_passed: true,
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state:
- gate_progress.devops.status = "completed"
- gate_progress.devops.artifacts = [list from skill output]
7. Proceed to Gate 2
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Dockerfile exists, skip other artifacts" | All artifacts are required. 1/4 ≠ complete. | Create all artifacts |
| "docker-compose not needed locally" | docker-compose is MANDATORY for local dev. | Create docker-compose.yml |
| "Makefile is optional" | Makefile is MANDATORY for standardized commands. | Create Makefile |
| ".env.example can be added later" | .env.example documents required config NOW. | Create .env.example |
| "Small service doesn't need all this" | Size is irrelevant. Standards apply uniformly. | Create all artifacts |
REQUIRED SUB-SKILL: Use ring:dev-sre
Gather from previous gates:
sre_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
// REQUIRED - from Gate 0 context
language: state.current_unit.language, // "go" | "typescript" | "python"
service_type: state.current_unit.service_type, // "api" | "worker" | "batch" | "cli"
implementation_agent: agent_outputs.implementation.agent, // e.g., "ring:backend-engineer-golang"
implementation_files: agent_outputs.implementation.files_changed, // list of files from Gate 0
// OPTIONAL - additional context
external_dependencies: state.current_unit.external_deps || state.detected_dependencies || [], // HTTP clients, gRPC, queues
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
gate1_handoff: agent_outputs.devops // full Gate 1 output
}
1. Record gate start timestamp
2. Invoke ring:dev-sre skill with structured input:
Skill("ring:dev-sre") with input:
unit_id: sre_input.unit_id
language: sre_input.language
service_type: sre_input.service_type
implementation_agent: sre_input.implementation_agent
implementation_files: sre_input.implementation_files
external_dependencies: sre_input.external_dependencies
gate0_handoff: sre_input.gate0_handoff
gate1_handoff: sre_input.gate1_handoff
The skill handles:
- Dispatching SRE agent for validation
- Structured logging validation
- Distributed tracing validation
- Code instrumentation coverage (90%+ required)
- Context propagation validation (InjectHTTPContext/InjectGRPCContext)
- Dispatching fixes to implementation agent if needed
- Re-validation loop (max 3 iterations)
3. Parse skill output for validation results:
Expected output sections:
- "## Validation Result" → status, iterations, coverage
- "## Instrumentation Coverage" → table with per-layer coverage
- "## Issues Found" → list or "None"
- "## Handoff to Next Gate" → ready_for_testing: YES/NO
if skill output contains "Status: PASS" and "Ready for Gate 3: YES":
→ Gate 2 PASSED. Proceed to Step 4.3.
if skill output contains "Status: FAIL" or "Ready for Gate 3: NO":
→ Gate 2 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran validation
→ If "ESCALATION" in output: STOP and report to user
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. When ring:dev-sre skill returns PASS:
Parse from skill output:
- status: extract from "## Validation Result"
- instrumentation_coverage: extract percentage from coverage table
- iterations: extract from "Iterations:" line
- agent_outputs.sre = {
skill: "ring:dev-sre",
output: "[full skill output]",
validation_result: "PASS",
instrumentation_coverage: "[X%]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state:
- gate_progress.sre.status = "completed"
- gate_progress.sre.observability_validated = true
- gate_progress.sre.instrumentation_coverage = "[X%]"
7. Proceed to Gate 3
See ring:dev-sre/SKILL.md for complete anti-rationalization tables covering:
- Observability deferral rationalizations
- Instrumentation coverage rationalizations
- Context propagation rationalizations
| User Says | Your Response |
|---|---|
| "Skip SRE validation, we'll add observability later" | "Observability is MANDATORY for Gate 2. Invoking ring:dev-sre skill now." |
| "SRE found issues but let's continue" | "Gate 2 is a HARD GATE. ring:dev-sre skill handles fix dispatch and re-validation." |
| "Instrumentation coverage is low but code works" | "90%+ instrumentation coverage is REQUIRED. ring:dev-sre skill will not pass until met." |
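The 90% instrumentation floor the skill enforces can be illustrated with a simple aggregation. Layer names and the counting scheme here are assumptions; the skill's per-layer coverage table is authoritative:

```go
package main

import "fmt"

// instrumentationCoverage aggregates per-layer counts and applies the
// 90% minimum Gate 2 requires.
func instrumentationCoverage(instrumented, total map[string]int) (float64, bool) {
	var done, all int
	for layer, n := range total {
		all += n
		done += instrumented[layer]
	}
	if all == 0 {
		return 0, false // nothing measurable counts as a failed validation
	}
	pct := float64(done) / float64(all) * 100
	return pct, pct >= 90.0
}

func main() {
	pct, ok := instrumentationCoverage(
		map[string]int{"handler": 12, "service": 8},
		map[string]int{"handler": 12, "service": 10},
	)
	fmt.Printf("coverage %.1f%%, pass=%v\n", pct, ok)
}
```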
REQUIRED SUB-SKILL: Use ring:dev-unit-testing
Gather from previous gates:
testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
acceptance_criteria: state.current_unit.acceptance_criteria, // list of ACs to test
implementation_files: agent_outputs.implementation.files_changed,
language: state.current_unit.language, // "go" | "typescript" | "python"
// OPTIONAL - additional context
coverage_threshold: 85, // Ring minimum, PROJECT_RULES.md can raise
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
existing_tests: [check for existing test files]
}
1. Record gate start timestamp
2. Invoke ring:dev-unit-testing skill with structured input:
Skill("ring:dev-unit-testing") with input:
unit_id: testing_input.unit_id
acceptance_criteria: testing_input.acceptance_criteria
implementation_files: testing_input.implementation_files
language: testing_input.language
coverage_threshold: testing_input.coverage_threshold
gate0_handoff: testing_input.gate0_handoff
existing_tests: testing_input.existing_tests
The skill handles:
- Dispatching ring:qa-analyst agent
- Test creation following TDD methodology
- Coverage measurement and validation (85%+ required)
- Traceability matrix (AC → Test mapping)
- Dispatching fixes to implementation agent if coverage < threshold
- Re-validation loop (max 3 iterations)
3. Parse skill output for results:
Expected output sections:
- "## Testing Summary" → status, iterations
- "## Coverage Report" → threshold vs actual
- "## Traceability Matrix" → AC-to-test mapping
- "## Handoff to Next Gate" → ready_for_next_gate: YES/NO
if skill output contains "Status: PASS" and "Ready for Next Gate: YES":
→ Gate 3 PASSED. Proceed to Step 5.3.
if skill output contains "Status: FAIL" or "Ready for Next Gate: NO":
→ Gate 3 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran coverage check
→ If "ESCALATION" in output: STOP and report to user
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. When ring:dev-unit-testing skill returns PASS:
Parse from skill output:
- coverage_actual: extract percentage from "## Coverage Report"
- coverage_threshold: extract from "## Coverage Report"
- criteria_covered: extract from "## Traceability Matrix"
- iterations: extract from "Iterations:" line
- agent_outputs.testing = {
skill: "ring:dev-unit-testing",
output: "[full skill output]",
verdict: "PASS",
coverage_actual: [X%],
coverage_threshold: [85%],
criteria_covered: "[X/Y]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time],
failures: [], // Empty when PASS; see schema below for FAIL
uncovered_criteria: [] // Empty when all ACs covered
}
**If iterations > 1 (tests failed before passing), populate `failures[]`:**
```json
failures: [
{
"test_name": "TestUserCreate_InvalidEmail",
"test_file": "internal/handler/user_test.go",
"error_type": "assertion|panic|timeout|compilation",
"expected": "[expected value]",
"actual": "[actual value]",
"message": "[error message from test output]",
"stack_trace": "[relevant stack trace]",
"fixed_in_iteration": [iteration number when fixed]
}
]
```
If coverage < 100% of acceptance criteria, populate `uncovered_criteria[]`:
```json
uncovered_criteria: [
  {
    "criterion_id": "AC-001",
    "description": "User should receive email confirmation",
    "reason": "No test found for email sending functionality"
  }
]
```
6. Update state:
   - gate_progress.testing.status = "completed"
   - gate_progress.testing.coverage = [coverage_actual]
7. Proceed to Gate 4 (Fuzz Testing)
### Gate 3 Thresholds
- **Minimum:** 85% (Ring standard - CANNOT be lowered)
- **Project-specific:** Can be higher if defined in `docs/PROJECT_RULES.md`
- **Validation:** Threshold < 85% → Use 85%
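The threshold rule above is, in effect, a one-line clamp. A minimal sketch:

```go
package main

import "fmt"

const ringMinimum = 85 // Ring standard - CANNOT be lowered

// effectiveThreshold applies the Gate 3 rule: PROJECT_RULES.md may raise
// the coverage bar, never lower it below the Ring minimum.
func effectiveThreshold(projectThreshold int) int {
	if projectThreshold > ringMinimum {
		return projectThreshold // project-specific bar is higher
	}
	return ringMinimum // thresholds below 85 are ignored
}

func main() {
	fmt.Println(effectiveThreshold(80), effectiveThreshold(92)) // prints "85 92"
}
```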
### Gate 3 Pressure Resistance
| User Says | Your Response |
|-----------|---------------|
| "84% is close enough" | "85% is Ring minimum. ring:dev-unit-testing skill enforces this." |
| "Skip testing, deadline" | "Testing is MANDATORY. ring:dev-unit-testing skill handles iterations." |
| "Manual testing covers it" | "Gate 3 requires executable unit tests. Invoking ring:dev-unit-testing now." |
## Step 6: Gate 4 - Fuzz Testing (Per Execution Unit)
**REQUIRED SUB-SKILL:** Use `ring:dev-fuzz-testing`
**MANDATORY GATE:** All code paths MUST have fuzz tests to discover edge cases and crashes.
### Step 6.1: Prepare Input for ring:dev-fuzz-testing Skill
```text
Gather from previous gates:
fuzz_testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
implementation_files: agent_outputs.implementation.files_changed,
language: state.current_unit.language, // "go" | "typescript"
// OPTIONAL - additional context
gate3_handoff: agent_outputs.testing // full Gate 3 output
}
```
1. Record gate start timestamp
2. Invoke ring:dev-fuzz-testing skill with structured input:
Skill("ring:dev-fuzz-testing") with input:
unit_id: fuzz_testing_input.unit_id
implementation_files: fuzz_testing_input.implementation_files
language: fuzz_testing_input.language
gate3_handoff: fuzz_testing_input.gate3_handoff
The skill handles:
- Dispatching ring:qa-analyst agent (test_mode: fuzz)
- Fuzz function creation (FuzzXxx naming)
- Seed corpus generation (minimum 5 entries)
- f.Add() pattern validation
- Dispatching fixes if crashes found
- Re-validation loop (max 3 iterations)
3. Parse skill output for results:
if skill output contains "Status: PASS":
→ Gate 4 PASSED. Proceed to Gate 5.
if skill output contains "Status: FAIL":
→ Gate 4 BLOCKED.
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. Update state:
- gate_progress.fuzz_testing.status = "completed"
- gate_progress.fuzz_testing.corpus_entries = [count]
6. Proceed to Gate 5 (Property Testing)
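The FuzzXxx naming and f.Add seed-corpus pattern Gate 4 verifies looks roughly like the sketch below. NormalizeName is a hypothetical target function; in a real project the fuzz function lives in a *_test.go file and runs via `go test -fuzz=FuzzNormalizeName`:

```go
package main

import (
	"fmt"
	"testing"
	"unicode/utf8"
)

// NormalizeName is a hypothetical target for illustration: it rejects
// invalid UTF-8 and truncates to 64 bytes on rune boundaries.
func NormalizeName(s string) string {
	if !utf8.ValidString(s) {
		return ""
	}
	for len(s) > 64 {
		_, size := utf8.DecodeLastRuneInString(s)
		s = s[:len(s)-size] // drop whole runes, never split one
	}
	return s
}

// FuzzNormalizeName shows the FuzzXxx naming and f.Add seed-corpus
// pattern (minimum 5 entries) that the gate checks for.
func FuzzNormalizeName(f *testing.F) {
	for _, seed := range []string{"", "a", "héllo", string(make([]byte, 100)), "\xff\xfe"} {
		f.Add(seed) // seed corpus entry
	}
	f.Fuzz(func(t *testing.T, in string) {
		out := NormalizeName(in)
		// Invariants the fuzzer probes: valid UTF-8 output, <= 64 bytes.
		if !utf8.ValidString(out) || len(out) > 64 {
			t.Errorf("invariant violated: %q -> %q", in, out)
		}
	})
}

func main() {
	fmt.Println(NormalizeName("héllo"))
}
```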
REQUIRED SUB-SKILL: Use ring:dev-property-testing
MANDATORY GATE: Domain invariants MUST be verified with property-based tests.
Gather from previous gates:
property_testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
implementation_files: agent_outputs.implementation.files_changed,
language: state.current_unit.language,
// Domain invariants from requirements
domain_invariants: state.current_unit.domain_invariants || []
}
1. Record gate start timestamp
2. Invoke ring:dev-property-testing skill with structured input:
Skill("ring:dev-property-testing") with input:
unit_id: property_testing_input.unit_id
implementation_files: property_testing_input.implementation_files
language: property_testing_input.language
domain_invariants: property_testing_input.domain_invariants
The skill handles:
- Dispatching ring:qa-analyst agent (test_mode: property)
- Property function creation (TestProperty_* naming)
- quick.Check pattern validation
- Invariant verification
- Dispatching fixes if properties fail
- Re-validation loop (max 3 iterations)
3. Parse skill output for results:
if skill output contains "Status: PASS":
→ Gate 5 PASSED. Proceed to Gate 6.
if skill output contains "Status: FAIL":
→ Gate 5 BLOCKED.
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. Update state:
- gate_progress.property_testing.status = "completed"
- gate_progress.property_testing.properties_tested = [count]
6. Proceed to Gate 6 (Integration Testing)
REQUIRED SUB-SKILL: Use ring:dev-integration-testing
MANDATORY GATE: All code MUST have integration tests using testcontainers.
⛔ DEFERRED EXECUTION: Per unit, this gate writes/updates integration test code and verifies compilation. Tests are NOT executed here (no containers). Actual execution happens at end of cycle (Step 12.1).
Gather from previous gates:
integration_testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
integration_scenarios: state.current_unit.integration_scenarios || [],
external_dependencies: state.current_unit.external_dependencies || state.detected_dependencies || [],
language: state.current_unit.language,
mode: "write_only", // CRITICAL: write tests, verify compilation, do NOT execute
// OPTIONAL - additional context
gate5_handoff: agent_outputs.property_testing,
implementation_files: agent_outputs.implementation.files_changed
}
// NOTE: external_dependencies falls back to state.detected_dependencies
// from Step 1.5 (cycle-level auto-detection) when the unit doesn't define them.
1. Record gate start timestamp
2. REQUIRED: Invoke ring:dev-integration-testing skill with structured input:
Skill("ring:dev-integration-testing") with input:
unit_id: integration_testing_input.unit_id
integration_scenarios: integration_testing_input.integration_scenarios
external_dependencies: integration_testing_input.external_dependencies
language: integration_testing_input.language
mode: "write_only"
gate5_handoff: integration_testing_input.gate5_handoff
implementation_files: integration_testing_input.implementation_files
In write_only mode, the skill handles:
- Dispatching ring:qa-analyst agent (test_mode: integration)
- Writing/updating integration test code for current unit's changes
- Verifying test compilation (go build ./... or tsc --noEmit)
- Verifying build tags (//go:build integration) present
- Verifying testcontainers imports present
- NOT spinning up containers or executing tests
3. REQUIRED: Parse skill output for results:
Expected output:
- "## Integration Test Code" → files written/updated
- "## Compilation Check" → PASS/FAIL
- "## Standards Compliance" → build tags, naming, testcontainers
if compilation PASS and standards met:
→ Gate 6 (write) PASSED. Proceed to Step 8.3.
if compilation FAIL:
→ Gate 6 BLOCKED. Fix compilation errors before proceeding.
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. Update state:
- gate_progress.integration_testing.write_status = "completed"
- gate_progress.integration_testing.execution_status = "deferred" // Executed at end of cycle
- gate_progress.integration_testing.test_files = [list of test files written/updated]
- gate_progress.integration_testing.compilation_passed = true
6. Proceed to Gate 7 (Chaos Testing — Write Only)
| User Says | Your Response |
|---|---|
| "Unit tests cover integration" | "Unit tests mock dependencies. Integration tests verify real behavior. Write the tests now, execute at end of cycle." |
| "Skip writing, we'll add tests later" | "Test code MUST be written per unit to stay current. Only execution is deferred." |
| "No external dependencies to test" | "Verify internal integration too. Write the tests, they'll execute at end of cycle." |
| "Just run the tests now" | "Deferred execution avoids redundant container spin-ups. Tests execute once at end of cycle." |
REQUIRED SUB-SKILL: Use ring:dev-chaos-testing
MANDATORY GATE: All external dependencies MUST have chaos tests for failure scenarios.
⛔ DEFERRED EXECUTION: Per unit, this gate writes/updates chaos test code and verifies compilation. Tests are NOT executed here (no Toxiproxy). Actual execution happens at end of cycle (Step 12.1).
Gather from previous gates:
chaos_testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
external_dependencies: state.current_unit.external_dependencies || state.detected_dependencies || [],
language: state.current_unit.language,
mode: "write_only", // CRITICAL: write tests, verify compilation, do NOT execute
// OPTIONAL - additional context
gate6_handoff: agent_outputs.integration_testing
}
// NOTE: external_dependencies falls back to state.detected_dependencies
// from Step 1.5 (cycle-level auto-detection) when the unit doesn't define them.
1. Record gate start timestamp
2. REQUIRED: Invoke ring:dev-chaos-testing skill with structured input:
Skill("ring:dev-chaos-testing") with input:
unit_id: chaos_testing_input.unit_id
external_dependencies: chaos_testing_input.external_dependencies
language: chaos_testing_input.language
mode: "write_only"
gate6_handoff: chaos_testing_input.gate6_handoff
In write_only mode, the skill handles:
- Dispatching ring:qa-analyst agent (test_mode: chaos)
- Writing/updating chaos test code for current unit's dependencies
- Verifying test compilation
- Verifying dual-gate pattern (CHAOS=1 + testing.Short())
- Verifying Toxiproxy imports present
- NOT starting Toxiproxy or executing failure scenarios
3. Parse skill output for results:
if compilation PASS and standards met:
→ Gate 7 (write) PASSED. Proceed to Gate 8.
if compilation FAIL:
→ Gate 7 BLOCKED. Fix compilation errors before proceeding.
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. Update state:
- gate_progress.chaos_testing.write_status = "completed"
- gate_progress.chaos_testing.execution_status = "deferred" // Executed at end of cycle
- gate_progress.chaos_testing.test_files = [list of test files written/updated]
- gate_progress.chaos_testing.compilation_passed = true
6. Proceed to Gate 8 (Review)
| User Says | Your Response |
|---|---|
| "Chaos testing is overkill" | "Chaos tests verify graceful degradation. Write them now, execute at end of cycle." |
| "Skip writing, add later" | "Test code MUST be written per unit. Only execution is deferred to end of cycle." |
| "Just run the chaos tests now" | "Deferred execution avoids redundant Toxiproxy spin-ups. Tests execute once at end of cycle." |
| "No time for chaos testing" | "Writing chaos tests per unit takes minutes. Execution cost is paid once at end." |
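The dual-gate pattern (CHAOS=1 + testing.Short()) that Gate 7 verifies can be sketched as below. TestChaos_PostgresLatency is a hypothetical test name used only to show where the guard sits:

```go
package main

import (
	"fmt"
	"os"
)

// chaosEnabled captures the dual-gate rule: chaos scenarios run only
// when CHAOS=1 AND tests are not running with -short.
func chaosEnabled(chaosEnv string, short bool) bool {
	return chaosEnv == "1" && !short
}

func main() {
	// In a real chaos test the guard sits at the top of the function:
	//
	//   func TestChaos_PostgresLatency(t *testing.T) {
	//       if os.Getenv("CHAOS") != "1" || testing.Short() {
	//           t.Skip("chaos tests need CHAOS=1 and no -short flag")
	//       }
	//       // ... Toxiproxy failure scenarios against real dependencies ...
	//   }
	fmt.Println("chaos enabled:", chaosEnabled(os.Getenv("CHAOS"), false))
}
```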
REQUIRED SUB-SKILL: Use ring:requesting-code-review
Gather from previous gates:
review_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
base_sha: state.current_unit.base_sha, // SHA before implementation
head_sha: [current HEAD], // SHA after all gates
implementation_summary: state.current_unit.title + requirements,
requirements: state.current_unit.acceptance_criteria,
// OPTIONAL - additional context
implementation_files: agent_outputs.implementation.files_changed,
gate0_handoff: agent_outputs.implementation // full Gate 0 output
}
1. Record gate start timestamp
2. Invoke ring:requesting-code-review skill with structured input:
Skill("ring:requesting-code-review") with input:
unit_id: review_input.unit_id
base_sha: review_input.base_sha
head_sha: review_input.head_sha
implementation_summary: review_input.implementation_summary
requirements: review_input.requirements
implementation_files: review_input.implementation_files
gate0_handoff: review_input.gate0_handoff
The skill handles:
- Dispatching all 7 reviewers in PARALLEL (single message with 7 Task calls)
- ring:code-reviewer, ring:business-logic-reviewer, ring:security-reviewer, ring:nil-safety-reviewer, ring:test-reviewer, ring:consequences-reviewer, ring:dead-code-reviewer
- Aggregating issues by severity (CRITICAL/HIGH/MEDIUM/LOW/COSMETIC)
- Dispatching fixes to implementation agent for blocking issues
- Re-running all 7 reviewers after fixes
- Iteration tracking (max 3 attempts)
- Adding TODO/FIXME comments for non-blocking issues
3. Parse skill output for results:
Expected output sections:
- "## Review Summary" → status, iterations
- "## Issues by Severity" → counts per severity level
- "## Reviewer Verdicts" → all 7 reviewers
- "## Handoff to Next Gate" → ready_for_validation: YES/NO
if skill output contains "Status: PASS" and "Ready for Gate 9: YES":
→ Gate 8 PASSED. Proceed to Step 10.3.
if skill output contains "Status: FAIL" or "Ready for Gate 9: NO":
→ Gate 8 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran all 7 reviewers
→ If "ESCALATION" in output: STOP and report to user
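The parse logic above reduces to a marker-string check. A minimal sketch, assuming the skill output contains the literal markers described in this step (the function name and fallback value are illustrative):

```python
# Illustrative parser for the Gate 8 skill output markers described above.

def parse_review_result(output: str) -> str:
    """Return PASSED, BLOCKED, or ESCALATE based on marker strings in skill output."""
    if "ESCALATION" in output:
        return "ESCALATE"  # stop and report to user
    if "Status: PASS" in output and "Ready for Gate 9: YES" in output:
        return "PASSED"    # Gate 8 passed; proceed
    if "Status: FAIL" in output or "Ready for Gate 9: NO" in output:
        return "BLOCKED"   # skill already dispatched fixes and re-ran reviewers
    return "UNKNOWN"       # malformed output: treat as blocking, do not proceed
```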
4. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
5. When ring:requesting-code-review skill returns PASS:
Parse from skill output:
- reviewers_passed: extract from "## Reviewer Verdicts" (should be "7/7")
- issues_critical: extract count from "## Issues by Severity"
- issues_high: extract count from "## Issues by Severity"
- issues_medium: extract count from "## Issues by Severity"
- iterations: extract from "Iterations:" line
- agent_outputs.review = {
skill: "ring:requesting-code-review",
output: "[full skill output]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time],
reviewers_passed: "7/7",
code_reviewer: {
verdict: "PASS",
issues_count: N,
issues: [] // Structured issues - see schema below
},
business_logic_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
security_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
nil_safety_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
test_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
consequences_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
dead_code_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
}
}
**Populate `issues[]` for each reviewer with all issues found (even if fixed):**
```json
issues: [
{
"severity": "CRITICAL|HIGH|MEDIUM|LOW|COSMETIC",
"category": "error-handling|security|performance|maintainability|business-logic|...",
"description": "[detailed description of the issue]",
"file": "internal/handler/user.go",
"line": 45,
"code_snippet": "return err",
"suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
"fixed": true|false,
"fixed_in_iteration": [iteration number when fixed, null if not fixed]
}
]
```
Issue tracking rules:
- ALL issues found across ALL iterations MUST be recorded
- `fixed: true` + `fixed_in_iteration: N` for issues resolved during review
- `fixed: false` + `fixed_in_iteration: null` for LOW/COSMETIC issues (TODO/FIXME added instead)
- This enables the feedback loop to analyze recurring issue patterns
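The issue records above feed the feedback loop. A minimal sketch of the aggregation it enables, assuming the issue schema shown (the `summarize_issues` helper is hypothetical):

```python
# Sketch: summarizing recorded review issues for the feedback loop.
# Assumes each issue follows the JSON schema above ("severity", "fixed" fields).
from collections import Counter

def summarize_issues(issues):
    """Count issues by severity and collect those left unfixed (TODO/FIXME added)."""
    by_severity = Counter(issue["severity"] for issue in issues)
    unfixed = [issue for issue in issues if not issue["fixed"]]
    return {"by_severity": dict(by_severity), "unfixed": unfixed}
```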
Update state:
- gate_progress.review.status = "completed"
- gate_progress.review.reviewers_passed = "7/7"

Proceed to Gate 9.
### Gate 8 Anti-Rationalization Table
| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Only 1 MEDIUM issue, can proceed" | MEDIUM = MUST FIX. Quantity is irrelevant. | **Fix the issue, re-run all 7 reviewers** |
| "Issue is cosmetic, not really MEDIUM" | Reviewer decided severity. Accept their judgment. | **Fix the issue, re-run all 7 reviewers** |
| "Will fix in next sprint" | Deferred fixes = technical debt = production bugs. | **Fix NOW before Gate 9** |
| "User approved, can skip fix" | User approval ≠ reviewer override. Fixes are mandatory. | **Fix the issue, re-run all 7 reviewers** |
| "Same issue keeps appearing, skip it" | Recurring issue = fix is wrong. Debug properly. | **Root cause analysis, then fix** |
| "Only one reviewer found it" | One reviewer = valid finding. All findings matter. | **Fix the issue, re-run all 7 reviewers** |
| "Iteration limit reached, just proceed" | Limit = escalate, not bypass. Quality is non-negotiable. | **Escalate to user, DO NOT proceed** |
| "Tests pass, review issues don't matter" | Tests ≠ review. Different quality dimensions. | **Fix the issue, re-run all 7 reviewers** |
### Gate 8 Pressure Resistance
| User Says | Your Response |
|-----------|---------------|
| "Just skip this MEDIUM issue" | "MEDIUM severity issues are blocking by definition. I MUST dispatch a fix to the appropriate agent before proceeding. This protects code quality." |
| "I'll fix it later, let's continue" | "Gate 8 is a HARD GATE. All CRITICAL/HIGH/MEDIUM issues must be resolved NOW. I'm dispatching the fix to [agent] and will re-run all 7 reviewers after." |
| "We're running out of time" | "Proceeding with known issues creates larger problems later. The fix dispatch is automated and typically takes 2-5 minutes. Quality gates exist to save time overall." |
| "Override the gate, I approve" | "User approval cannot override reviewer findings. The gate ensures code quality. I'll dispatch the fix now." |
| "It's just a style issue" | "If it's truly cosmetic, reviewers would mark it COSMETIC (non-blocking). MEDIUM means it affects maintainability or correctness. Fixing now." |
---
## Step 11: Gate 9 - Validation (Per Execution Unit)
```text
For current execution unit:
1. Record gate start timestamp
2. Verify acceptance criteria:
For each criterion in acceptance_criteria:
- Check if implemented
- Check if tested
- Mark as PASS/FAIL
3. Run final verification:
- All tests pass?
- No Critical/High/Medium review issues?
- All acceptance criteria met?
4. If validation fails:
- Log failure reasons
- Determine which gate to revisit
- Loop back to appropriate gate
5. If validation passes:
- Set unit status = "completed"
- Record gate end timestamp
- agent_outputs.validation = {
result: "approved",
timestamp: "[ISO timestamp]",
criteria_results: [{criterion, status}]
}
- Proceed to Step 11.1 (Execution Unit Approval)
```
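The Gate 9 criteria check above can be sketched as a simple loop. The predicate arguments stand in for real checks (tests, review state, criteria evidence) and are assumptions, not part of the skill contract:

```python
# Minimal sketch of the Gate 9 validation loop. is_implemented/is_tested are
# placeholder predicates for the real per-criterion checks.

def validate_unit(criteria, is_implemented, is_tested):
    """Mark each acceptance criterion PASS/FAIL; the unit passes only if all PASS."""
    results = []
    for criterion in criteria:
        status = "PASS" if is_implemented(criterion) and is_tested(criterion) else "FAIL"
        results.append({"criterion": criterion, "status": status})
    passed = all(r["status"] == "PASS" for r in results)
    return passed, results
```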
Checkpoint depends on execution_mode: manual_per_subtask → Execute | manual_per_task / automatic → Skip
0a. COMMIT CHECK (before checkpoint):
   - if `commit_timing == "per_subtask"`:
     - Execute `/ring:commit` command with message: `feat({unit_id}): {unit_title}`
     - Include all changed files from this subtask
   - else: Skip commit (will happen at task or cycle end)
0b. VISUAL CHANGE REPORT (MANDATORY - before checkpoint):
   - MANDATORY: Invoke `Skill("ring:visual-explainer")` to generate a code-diff HTML report for this execution unit
   - Read `default/skills/visual-explainer/templates/code-diff.html` to absorb the patterns before generating
   - Content sourced from state JSON `agent_outputs` for the current unit:
     - TDD Output: `tdd_red` (failing test with failure_output) + `tdd_green` (implementation with pass_output)
     - Files Changed: Per-file before/after using `git diff` data from the implementation (for new files, show "New File" in the before panel). Do not read source files directly; use diff output provided by the implementation agent.
     - Review Verdicts: Summary of all 7 reviewer verdicts from Gate 8
     - Acceptance Criteria: Status from Gate 9 validation
   - HTML includes: KPI cards (files changed, tests added, review iterations, gate pass/fail summary), per-file diff panels, review issues section (if any Medium+ issues were found and fixed)
   - Save to: `docs/ring:dev-cycle/reports/unit-{unit_id}-report.html`
   - Open in browser:
     - macOS: `open docs/ring:dev-cycle/reports/unit-{unit_id}-report.html`
     - Linux: `xdg-open docs/ring:dev-cycle/reports/unit-{unit_id}-report.html`
   - Tell the user the file path
   - See shared-patterns/anti-rationalization-visual-report.md for the anti-rationalization table
- Set `status = "paused_for_approval"`, save state
- Present summary: Unit ID, Parent Task, Gates 0-9 status, Criteria X/X, Duration, Files Changed, Commit Status
- AskUserQuestion: "Ready to proceed?" Options: (a) Continue (b) Test First (c) Stop Here
- Handle response:
| Response | Action |
|---|---|
| Continue | Set in_progress, move to next unit (or Step 11.2 if last) |
| Test First | Set paused_for_testing, STOP, output resume command |
| Stop Here | Set paused, STOP, output resume command |
Checkpoint depends on execution_mode: manual_per_subtask / manual_per_task → Execute | automatic → Skip
0a. COMMIT CHECK (before task checkpoint):
   - if `commit_timing == "per_task"`:
     - Execute `/ring:commit` command with message: `feat({task_id}): {task_title}`
     - Include all changed files from this task (all subtasks combined)
   - else if `commit_timing == "per_subtask"`: Already committed per subtask
   - else: Skip commit (will happen at cycle end)
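The commit-timing branches above (and the analogous per-subtask and at-end checks) reduce to one decision. A sketch, where the checkpoint names are illustrative labels rather than values defined by this document:

```python
# Sketch of the commit-timing decision used at subtask, task, and cycle-end
# checkpoints. Checkpoint labels here are illustrative.

def should_commit(commit_timing: str, checkpoint: str) -> bool:
    """True when /ring:commit should run at this checkpoint."""
    if checkpoint == "subtask":
        return commit_timing == "per_subtask"
    if checkpoint == "task":
        # per_subtask already committed each subtask; at_end waits for cycle end
        return commit_timing == "per_task"
    if checkpoint == "cycle_end":
        return commit_timing == "at_end"
    return False
```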
0b. VISUAL CHANGE REPORT (MANDATORY - before task checkpoint):
   - MANDATORY: Invoke `Skill("ring:visual-explainer")` to generate an aggregate code-diff HTML report for all subtasks in this task
   - Read `default/skills/visual-explainer/templates/code-diff.html` to absorb the patterns before generating
   - Content aggregated from all subtask executions:
     - Task Overview: Task ID, title, all subtask IDs and their gate statuses
     - Combined File Changes: All files modified across all subtasks with before/after diff panels
     - Aggregate Metrics: Total tests added, total review iterations, total lines changed
   - Save to: `docs/ring:dev-cycle/reports/task-{task_id}-report.html`
   - Open in browser:
     - macOS: `open docs/ring:dev-cycle/reports/task-{task_id}-report.html`
     - Linux: `xdg-open docs/ring:dev-cycle/reports/task-{task_id}-report.html`
   - Tell the user the file path
   - See shared-patterns/anti-rationalization-visual-report.md for the anti-rationalization table
- Set task `status = "completed"`, cycle `status = "paused_for_task_approval"`, save state, and update tasks.md Status → `✅ Done` (per Step 11.2 row in State Persistence Checkpoints table)
- Present summary: Task ID, Subtasks X/X, Total Duration, Review Iterations, Files Changed, Commit Status
- AskUserQuestion: "Task complete. Ready for next?" Options: (a) Continue (b) Integration Test (c) Stop Here
- Handle response:
After completing all subtasks of a task:
0. Check execution_mode from state:
- If "automatic": Still run feedback, then skip to next task
- If "manual_per_subtask" or "manual_per_task": Continue with checkpoint below
1. Set task status = "completed"
2. **⛔ MANDATORY: Run ring:dev-feedback-loop skill**
```yaml
Skill tool:
  skill: "ring:dev-feedback-loop"
```
Note: ring:dev-feedback-loop manages its own TodoWrite tracking internally.
The skill will:
- Add its own todo item for tracking
- Calculate assertiveness score for the task
- Dispatch prompt-quality-reviewer agent with agent_outputs from state
- Generate improvement suggestions
- Write feedback to docs/feedbacks/cycle-{date}/{agent}.md
- Mark its todo as completed
After feedback-loop completes, update state:
- Set `tasks[current].feedback_loop_completed = true` in state file
Anti-Rationalization for Feedback Loop:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Task was simple, skip feedback" | Simple tasks still contribute to patterns | Execute Skill tool |
| "Already at 100% score" | High scores need tracking for replication | Execute Skill tool |
| "User approved, feedback unnecessary" | Approval ≠ process quality metrics | Execute Skill tool |
| "No issues found, nothing to report" | Absence of issues IS data | Execute Skill tool |
| "Time pressure, skip metrics" | Metrics take <2 min, prevent future issues | Execute Skill tool |
⛔ HARD GATE: You CANNOT proceed to step 3 without executing the Skill tool above.
3. Set cycle status = "paused_for_task_approval"
4. Save state
5. Present task completion summary (with feedback metrics):
   ┌─────────────────────────────────────────────────┐
   │ ✓ TASK COMPLETED                                │
   ├─────────────────────────────────────────────────┤
   │ Task: [task_id] - [task_title]                  │
   │                                                 │
   │ Subtasks Completed: X/X                         │
   │   ✓ ST-001-01: [title]                          │
   │   ✓ ST-001-02: [title]                          │
   │   ✓ ST-001-03: [title]                          │
   │                                                 │
   │ Total Duration: Xh Xm                           │
   │ Total Review Iterations: N                      │
   │                                                 │
   │ ═══════════════════════════════════════════════ │
   │ FEEDBACK METRICS                                │
   │ ═══════════════════════════════════════════════ │
   │                                                 │
   │ Assertiveness Score: XX% (Rating)               │
   │                                                 │
   │ Prompt Quality by Agent:                        │
   │   ring:backend-engineer-golang: 90% (Excellent) │
   │   ring:qa-analyst: 75% (Acceptable)             │
   │   ring:code-reviewer: 88% (Good)                │
   │                                                 │
   │ Improvements Suggested: N                       │
   │ Feedback Location:                              │
   │   docs/feedbacks/cycle-YYYY-MM-DD/              │
   │                                                 │
   │ ═══════════════════════════════════════════════ │
   │                                                 │
   │ All Files Changed This Task:                    │
   │ - file1.go                                      │
   │ - file2.go                                      │
   │ - ...                                           │
   │                                                 │
   │ Next Task: [next_task_id] - [next_task_title]   │
   │   Subtasks: N (or "TDD autonomous")             │
   │   or "No more tasks - cycle complete"           │
   └─────────────────────────────────────────────────┘
6. ASK FOR EXPLICIT APPROVAL using AskUserQuestion tool:
   Question: "Task [task_id] complete. Ready to start the next task?"
   Options:
   a) "Continue" - Proceed to next task
   b) "Integration Test" - User wants to test the full task integration
   c) "Stop Here" - Pause cycle
7. Handle user response:
If "Continue":
- Set status = "in_progress"
- Move to next task
- Set current_task_index += 1
- Set current_subtask_index = 0
- Reset to Gate 0
- Continue execution
If "Integration Test":
- Set status = "paused_for_integration_testing"
- Save state
- Output: "Cycle paused for integration testing. Test task [task_id] integration and run: /ring:dev-cycle --resume when ready to continue."
- STOP execution
If "Stop Here":
- Set status = "paused"
- Save state
- Output: "Cycle paused after task [task_id]. Resume with: /ring:dev-cycle --resume"
- STOP execution
**Note:** Tasks without subtasks execute both Steps 11.1 and 11.2 in sequence.
## Step 12: Cycle Completion
### Step 12.0: Deferred Test Execution (Gates 6-7)
**⛔ MANDATORY: Execute integration and chaos tests before final commit.**
All units have written/updated test code during their Gate 6-7 passes. Now execute all tests once.
```text
1. Record deferred execution start timestamp
2. REQUIRED: Invoke ring:dev-integration-testing skill in EXECUTE mode:
Skill("ring:dev-integration-testing") with input:
mode: "execute"
all_test_files: [aggregate gate_progress.integration_testing.test_files from all units]
language: state.language
The skill handles:
- Spinning up testcontainers for all external dependencies
- Running ALL integration tests across all units
- Reporting pass/fail per test file
- If failures: dispatching fixes and re-running (max 3 iterations)
3. REQUIRED: Invoke ring:dev-chaos-testing skill in EXECUTE mode:
Skill("ring:dev-chaos-testing") with input:
mode: "execute"
all_test_files: [aggregate gate_progress.chaos_testing.test_files from all units]
language: state.language
The skill handles:
- Starting Toxiproxy
- Running ALL chaos tests across all units
- Verifying recovery for all failure scenarios
- If failures: dispatching fixes and re-running (max 3 iterations)
4. Update state:
- gate_progress.integration_testing.execution_status = "completed" (or "failed")
- gate_progress.chaos_testing.execution_status = "completed" (or "failed")
5. if any test FAILS after 3 iterations:
→ HARD BLOCK. Cannot complete cycle.
→ Report failures to user.
6. **MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]**
```
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "All unit/fuzz/property tests passed, skip integration" | Different test types catch different bugs. All are MANDATORY. | Execute deferred tests |
| "Tests were written, that's enough" | Written ≠ passing. Execution verifies real behavior. | Execute deferred tests |
| "Containers are slow, let CI handle it" | CI is backup, not replacement. Verify locally first. | Execute deferred tests |
| "One test failed but it's flaky" | Flaky = unreliable = fix it. No exceptions. | Fix and re-run |
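The aggregation in steps 2-3 above (collecting every test file written during Gates 6-7 across units) can be sketched as follows; the nested state shape is assumed from the `gate_progress.*.test_files` paths named above:

```python
# Sketch: flattening gate_progress.<gate_key>.test_files across all units so
# the deferred EXECUTE pass runs every integration/chaos test exactly once.

def aggregate_test_files(tasks, gate_key):
    """Collect test files for one gate across all units, preserving order, de-duplicated."""
    seen, files = set(), []
    for task in tasks:
        for unit in task.get("units", []):
            gate = unit.get("gate_progress", {}).get(gate_key, {})
            for path in gate.get("test_files", []):
                if path not in seen:
                    seen.add(path)
                    files.append(path)
    return files
```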
### Step 12.1: Multi-Tenant Dual-Mode Sanity Check

Multi-tenant dual-mode is now implemented during Gate 0 and verified at Gate 0.5G. This post-cycle step is a final sanity check only — it does NOT implement or adapt any code.
1. Verify Gate 0.5G passed for ALL units:
for each task in state.tasks:
for each unit in task.units:
if unit.gate_progress.delivery_verification.mt_dualmode != "PASS":
→ HARD BLOCK: "Unit [unit_id] failed MT dual-mode verification at Gate 0.5G"
→ This should never happen (Gate 0.5G blocks progression)
2. Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ MULTI-TENANT DUAL-MODE VERIFIED │
├─────────────────────────────────────────────────┤
│ Mode: Dual-mode (implemented at Gate 0) │
│ Verification: Gate 0.5G PASS for all units │
│ Resources Covered: [PG/Mongo/Redis/RMQ/S3] │
│ Backward Compat: Resolvers handle single-tenant │
└─────────────────────────────────────────────────┘
3. MANDATORY: ⛔ Save state to file — Write tool → [state.state_path]
Note: The full ring:dev-multi-tenant skill (12 gates) is now used ONLY for migrating legacy codebases that were written before dual-mode was standard. For new development via dev-cycle, all multi-tenant compliance is handled by Gate 0 (implementation) + Gate 0.5G (verification).
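The verification loop in step 1 above can be sketched directly; the nested state shape follows the `unit.gate_progress.delivery_verification.mt_dualmode` path named there, and the helper name is illustrative:

```python
# Sketch of the Gate 0.5G sanity check: every unit must carry a PASS verdict
# for multi-tenant dual-mode. Any other value triggers a HARD BLOCK.

def verify_mt_dualmode(tasks):
    """Return IDs of units whose delivery_verification.mt_dualmode is not PASS."""
    failures = []
    for task in tasks:
        for unit in task.get("units", []):
            verdict = unit["gate_progress"]["delivery_verification"].get("mt_dualmode")
            if verdict != "PASS":
                failures.append(unit["id"])  # should never happen: Gate 0.5G blocks
    return failures
```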
1. FINAL COMMIT CHECK (before completion):
   - if `commit_timing == "at_end"`:
     - Execute `/ring:commit` command with message: `feat({cycle_id}): complete dev cycle for {feature_name}`
     - Include all changed files from the entire cycle
   - else: Already committed per subtask or per task
2. Calculate metrics: total_duration_ms, average gate durations, review iterations, pass/fail ratio
3. Update state: `status = "completed"`, `completed_at = timestamp`
4. Generate report: Task | Subtasks | Duration | Review Iterations | Status | Commit Status
5. ⛔ MANDATORY: Run ring:dev-feedback-loop skill for cycle metrics
   Skill tool: skill: "ring:dev-feedback-loop"
   Note: ring:dev-feedback-loop manages its own TodoWrite tracking internally.
   After feedback-loop completes, update state:
   - Set `feedback_loop_completed = true` at cycle level in state file
   ⛔ HARD GATE: Cycle incomplete until feedback-loop executes.

   | Rationalization | Why It's WRONG | Required Action |
   |---|---|---|
   | "Cycle done, feedback is extra" | Feedback IS part of cycle completion | Execute Skill tool |
   | "Will run feedback next session" | Next session = never. Run NOW. | Execute Skill tool |
   | "All tasks passed, no insights" | Pass patterns need documentation too | Execute Skill tool |
6. Report: "Cycle completed. Tasks X/X, Subtasks Y, Time Xh Xm, Review iterations X"
# Full PM workflow then dev execution
/ring:pre-dev-full my-feature
/ring:dev-cycle docs/pre-dev/my-feature/
# Simple PM workflow then dev execution
/ring:pre-dev-feature my-feature
/ring:dev-cycle docs/pre-dev/my-feature/tasks.md
# Manual task file
/ring:dev-cycle docs/tasks/sprint-001.md
# Resume interrupted cycle
/ring:dev-cycle --resume

| Type | Condition | Action |
|---|---|---|
| Recoverable | Network timeout | Retry with exponential backoff |
| Recoverable | Agent failure | Retry once, then pause for user |
| Recoverable | Test flakiness | Re-run tests up to 2 times |
| Non-Recoverable | Missing required files | Stop and report |
| Non-Recoverable | Invalid state file | Must restart (cannot resume) |
| Non-Recoverable | Max review iterations | Pause for user |
On any error: Update state → Set status (failed/paused) → Save immediately → Report (what failed, why, how to recover, resume command)
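The "retry with exponential backoff" row above can be sketched as a small helper. The attempt count and delay values here are assumptions for illustration, not values mandated by this document:

```python
# Illustrative retry helper for recoverable errors (e.g. network timeouts).
# max_attempts and base_delay are assumed defaults, not mandated values.
import time

def retry_with_backoff(fn, max_attempts=3, base_delay=1.0):
    """Call fn, retrying with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: caller saves state and reports per the rule above
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```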
Base metrics per shared-patterns/output-execution-report.md.
| Metric | Value |
|---|---|
| Duration | Xh Xm Ys |
| Tasks Processed | N/M |
| Current Gate | Gate X - [name] |
| Review Iterations | N |
| Result | PASS/FAIL/IN_PROGRESS |
| Gate | Duration | Status |
|---|---|---|
| Implementation | Xm Ys | in_progress |
| DevOps | - | pending |
| SRE | - | pending |
| Testing | - | pending |
| Review | - | pending |
| Validation | - | pending |
docs/ring:dev-cycle/current-cycle.json (feature) or docs/ring:dev-refactor/current-cycle.json (refactor)
When the backend dev cycle completes, it produces a handoff file for the frontend dev cycle (ring:dev-cycle-frontend). This enables the frontend cycle to verify E2E tests exercise the correct API endpoints and use the right type contracts.
Path: docs/ring:dev-cycle/handoff-frontend.json
Generated: Automatically after Gate 9 (Validation) passes for all tasks.
{
"cycle_id": "string",
"generated_at": "ISO-8601",
"endpoints": [
{
"method": "GET|POST|PUT|PATCH|DELETE",
"path": "/api/v1/resource",
"request_schema": "object or null",
"response_schema": "object",
"status_codes": [200, 400, 404, 500],
"auth_required": true
}
],
"types_exported": [
{
"name": "ResourceDTO",
"file": "src/types/resource.ts",
"fields": ["id", "name", "createdAt"]
}
],
"contracts": [
{
"consumer": "frontend",
"provider": "backend",
"endpoint": "/api/v1/resource",
"format": "JSON"
}
]
}

| Frontend Gate | Handoff Usage |
|---|---|
| Gate 0 (Implementation) | Import types from types_exported, call endpoints |
| Gate 5 (E2E Testing) | Verify all endpoints are exercised in E2E tests |
| Gate 6 (Performance) | Measure response times against endpoints |
If docs/ring:dev-cycle/handoff-frontend.json does not exist, the frontend cycle proceeds without it. The frontend engineer defines API contracts inline based on PROJECT_RULES.md or user input. This is common for greenfield frontend-only projects.
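A consumer-side sketch of the handoff described above, assuming the JSON schema shown; both helper names are hypothetical, and the missing-file fallback mirrors the greenfield behavior just described:

```python
# Sketch: frontend-cycle consumption of handoff-frontend.json.
# Returns None when the backend cycle produced no handoff (greenfield case).
import json
import os

def load_handoff(path="docs/ring:dev-cycle/handoff-frontend.json"):
    """Parse the backend handoff file, or return None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

def uncovered_endpoints(handoff, exercised):
    """Endpoints declared in the handoff but absent from E2E coverage (Gate 5 check)."""
    declared = {(e["method"], e["path"]) for e in handoff["endpoints"]}
    return sorted(declared - set(exercised))
```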