Execute prompts, build workflows, and manage your resource library — all through MCP tool calls. The server hot-reloads everything automatically, so changes take effect without a restart.
```bash
# Discover prompts (use resources for token efficiency)
ReadMcpResourceTool uri="resource://prompt/"

# Execute a prompt with arguments
prompt_engine(command:"@CAGEERF analysis_report content:'Q4 metrics'")

# Chain two prompts together
prompt_engine(command:"research topic:'AI safety' --> summary")

# Check server status
system_control(action:"status")

# Create a gate (use tools for mutations)
resource_manager(resource_type:"gate", action:"create", id:"my-gate", guidance:"...")

# Switch methodology
resource_manager(resource_type:"methodology", action:"switch", id:"cageerf")
```

That's it. Resources for READ, tools for WRITE. Everything below is details.
| I want to... | Tool | Example |
|---|---|---|
| Run a prompt or chain | `prompt_engine` | `prompt_engine(command:">>review file:'api.ts'")` |
| Create, edit, or delete a resource | `resource_manager` | `resource_manager(resource_type:"prompt", action:"create", ...)` |
| Check status, switch frameworks, view metrics | `system_control` | `system_control(action:"status")` |
MCP Resources provide a read-only, token-efficient alternative to tool-based list/inspect operations. Use resources when you need to:
- Discover available prompts, gates, and methodologies without consuming execution tokens
- Read prompt templates, gate guidance, or methodology configs in a structured format
- Monitor active chain sessions and pipeline metrics for observability
- Recover context after compaction or long tasks via session resources
### Content Resources (Prompts, Gates, Methodologies)

| URI Pattern | Returns | Use Case |
|---|---|---|
| `resource://prompt/` | All prompts (minimal metadata) | Discovery - find available prompts |
| `resource://prompt/{id}` | Full prompt with metadata + template | Inspect a specific prompt |
| `resource://prompt/{id}/template` | Raw template content only | Minimal token usage |
| `resource://gate/` | All gates (minimal metadata) | Discovery - find available gates |
| `resource://gate/{id}` | Gate definition + guidance | Inspect a specific gate |
| `resource://gate/{id}/guidance` | Raw guidance content only | Minimal token usage |
| `resource://methodology/` | All frameworks (name, enabled) | Discovery - find methodologies |
| `resource://methodology/{id}` | Framework config + system prompt | Inspect methodology details |
### Observability Resources (Sessions, Metrics)

| URI Pattern | Returns | Use Case |
|---|---|---|
| `resource://session/` | Active chain sessions | Context recovery - what chains are running? |
| `resource://session/{chainId}` | Session state + progress | Inspect chain for resumption |
| `resource://metrics/pipeline` | Execution analytics (lean) | Observability - system health |
> **Note:** Session URIs use the user-facing `chainId` (e.g., `chain-quick_decision#1`) — the same identifier used to resume chains via the `chain_id` parameter.
### Token Efficiency

Resources are 4-30x more token efficient than equivalent tool calls:

| Operation | Tool Call | Resource | Savings |
|---|---|---|---|
| List 80 prompts | ~4500 chars | ~2800 chars | 38% |
| List 13 gates | ~600 chars | ~400 chars | 33% |
| List 5 methodologies | ~350 chars | ~200 chars | 43% |
| Pipeline metrics | ~15KB (raw samples) | ~500 bytes | 97% |
After compaction or long tasks, use session resources to recover chain context:

```bash
# List active chains (what am I working on?)
ReadMcpResourceTool uri="resource://session/"
# Response shows chainId for direct resumption:
# [{ "uri": "resource://session/chain-quick_decision#1", "name": "chain-quick_decision#1", ... }]

# Get details for a specific chain
ReadMcpResourceTool uri="resource://session/chain-quick_decision#1"

# Resume the chain directly using the chainId
prompt_engine(chain_id:"chain-quick_decision#1", user_response:"your output here")
```

### Example Usage (MCP Protocol)
```json
// List all prompts
{"method": "resources/list"}

// Read a specific prompt
{"method": "resources/read", "params": {"uri": "resource://prompt/code_review"}}

// Read just the template
{"method": "resources/read", "params": {"uri": "resource://prompt/code_review/template"}}

// Check active sessions (context recovery)
{"method": "resources/read", "params": {"uri": "resource://session/"}}

// Get pipeline metrics
{"method": "resources/read", "params": {"uri": "resource://metrics/pipeline"}}
```

Both MCP Resources and the resource_manager tool can list/inspect content. Use the right one:
| Need | Use | Why |
|---|---|---|
| Discovery (list, browse) | Resources | 4-30x fewer tokens |
| Inspection (read details) | Resources | Direct URI, no params needed |
| Context recovery | Resources | Sessions use chainId directly |
| Create/Update/Delete | Tools | Resources are read-only |
| Filtered search | Tools | `filter:"category:analysis"` supported |
| Client lacks resources support | Tools | Fallback compatibility |
Default rule: Resources for READ, Tools for WRITE.
```bash
# ✅ Preferred: Use resources for discovery
ReadMcpResourceTool uri="resource://prompt/"

# ⚠️ Fallback: Use tools only if resources unavailable or need filtering
resource_manager(resource_type:"prompt", action:"list", filter:"category:analysis")

# ✅ Required: Use tools for mutations
resource_manager(resource_type:"prompt", action:"create", id:"my-prompt", ...)
```

When prompts or gates are modified (via resource_manager or file changes), connected clients receive a `notifications/resources/list_changed` event. Use this to refresh cached resource lists.
The workhorse. Takes a command, resolves the prompt, applies frameworks/gates, returns structured instructions.
```bash
prompt_engine(command:"[modifiers] [framework] prompt_id [args] [gates]")
```

Real examples:
```bash
# Simple prompt execution
prompt_engine(command:"code_review file:'api.ts'")

# With framework methodology
prompt_engine(command:"@CAGEERF security_audit target:'auth module'")

# With inline quality gates
prompt_engine(command:"research topic:'LLMs' :: 'cite sources, note confidence'")

# Full chain with everything
prompt_engine(command:"@ReACT analysis --> synthesis --> report :: 'include data'")
```

| Operator | Syntax | Example | Purpose |
|---|---|---|---|
| Framework | `@NAME` | `@CAGEERF prompt` | Apply methodology |
| Chain | `-->` | `step1 --> step2` | Sequential execution |
| Delegation | `==>` | `step1 ==> step2` | Hand off step to sub-agent |
| Repetition | `* N` | `>>prompt * 3` | Repeat with same args (chain shorthand) |
| Gate (anon) | `:: "text"` | `:: 'cite sources'` | Anonymous quality criteria |
| Gate (named) | `:: id:"text"` | `:: security:"no secrets"` | Named gate with trackable ID |
| Style | `#id` | `#analytical` | Response formatting |
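Delegation (`==>`) follows the same step syntax as chaining; a sketch based on the operator table (the prompt IDs here are placeholders):

```bash
# Run research locally, then hand the summarize step off to a sub-agent
prompt_engine(command:">>research topic:'rate limiting' ==> >>summarize")
```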
**Repetition (`* N`) - Same Arguments:**

The `* N` operator unfolds to a chain with identical arguments on each step:

```bash
# Expansion: >>brainstorm topic:'ideas' --> >>brainstorm topic:'ideas' --> ...
prompt_engine(command:">>brainstorm * 5 topic:'startup ideas'")

# Mid-chain repetition: >>analyze --> >>analyze --> >>summarize
prompt_engine(command:">>analyze * 2 --> >>summarize")

# Each iteration uses the same plan_path
prompt_engine(command:">>strategicImplement * 3 plan_path:'./plan.md'")
```

**Varied Arguments per Step (use explicit chain):**
For different arguments on each step, use explicit `-->` chain syntax instead:

```bash
# Different topics per research step
prompt_engine(command:">>research topic:'A' --> >>research topic:'B' --> >>compare")

# Different inputs per validation step
prompt_engine(command:">>validate input:'step1' --> >>validate input:'step2' --> >>synthesize")
```

**Repetition vs Chain Decision:**

| Pattern | Syntax | Use When |
|---|---|---|
| Same-args | `>>p * N` | Same task repeated for variety (brainstorming, validation) |
| Varied-args | `>>p arg1 --> >>p arg2` | Different inputs per step |
**Context propagation:** Each chain step receives the previous step's output automatically, regardless of whether you use `* N` or explicit chains.
**Style examples:**

```bash
# Apply analytical style to a report
prompt_engine(command:"#analytical report topic:'Q4 metrics'")

# Combine style with framework
prompt_engine(command:"#procedural @CAGEERF tutorial subject:'React hooks'")

# Available styles: analytical, procedural, creative, reasoning
```

| Modifier | Effect |
|---|---|
| `%clean` | No framework/gate injection |
| `%lean` | Gates only, skip framework |
| `%judge` | Show guidance menu, don't execute |
| `%framework` | Framework only, skip gates |
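The `%framework` modifier from the table works the same way as the others; a sketch (the prompt ID is a placeholder):

```bash
# Framework guidance only, skip gate injection
prompt_engine(command:"%framework security_audit target:'auth module'")
```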
```bash
# Skip all injection for quick iteration
prompt_engine(command:"%clean my_prompt input:'test'")

# Get framework/gate recommendations without executing
prompt_engine(command:"%judge analysis_report")
```

### Parameters
| Parameter | Type | Purpose |
|---|---|---|
| `command` | string | Prompt ID with operators and arguments |
| `chain_id` | string | Resume token for continuing chains |
| `user_response` | string | Your output from previous step (for chain resume) |
| `gate_verdict` | string | Gate review verdict. Preferred: `GATE_REVIEW: PASS/FAIL - reason`. Also accepts `GATE PASS/FAIL - reason` or minimal `PASS/FAIL - reason` (minimal only via `gate_verdict`, not parsed from `user_response`). Rationale required. |
| `gate_action` | enum | `retry`, `skip`, or `abort` after gate failure |
| `gates` | array | Quality gates (IDs, quick checks, or full definitions) |
| `force_restart` | boolean | Restart chain from step 1 |
| `options` | object | Optional execution hints. Supports `client_profile` (`clientFamily`, `clientId`, `clientVersion`, `delegationProfile`) to help delegation strategy selection when transport metadata is unavailable. |
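The `options` object is passed alongside `command`; a hedged sketch of supplying a client profile (the profile values shown are illustrative placeholders, not required values):

```bash
prompt_engine(
  command:">>analysis ==> >>report",
  options:{client_profile:{clientFamily:"claude", clientId:"claude-code", delegationProfile:"default"}}
)
```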
For step schemas, input mapping, and retries, see the Chain Schema Reference.
**Start a chain:**

```bash
prompt_engine(command:"research topic:'security' --> analysis --> recommendations")
```

**Resume a chain (after completing a step):**

```bash
prompt_engine(
  chain_id:"chain-research#2",
  user_response:"Step 1 complete. Key findings: ..."
)
```

**Handle gate reviews:**

```bash
prompt_engine(
  chain_id:"chain-research#2",
  gate_verdict:"GATE_REVIEW: PASS - All sources cited"
)
```

**Combined resume (recommended for token efficiency):**

```bash
prompt_engine(
  chain_id:"chain-research#2",
  user_response:"Step 2 output...",
  gate_verdict:"GATE_REVIEW: PASS - criteria met"
)
```

Notes:
- Verdicts are only read from `gate_verdict`; they are not parsed from `user_response`.
- On PASS without an existing review, the chain continues; on FAIL, a review screen is created with context. Use `gate_action:"retry|skip|abort"` when retries are exhausted.
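When retries are exhausted and the review screen asks for a decision, respond with `gate_action` (the chain ID here is a placeholder):

```bash
# Skip the failed gate and continue the chain
prompt_engine(chain_id:"chain-research#2", gate_action:"skip")
```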
### Gates: Four Ways to Validate
For gate configuration, enforcement modes, and custom definitions, see the [Gate Configuration Reference](../reference/gate-configuration.md).
```bash
# 1. Anonymous inline criteria (simplest)
prompt_engine(command:"report :: 'cite sources, include confidence levels'")
# 2. Named inline gates (with trackable IDs)
prompt_engine(command:"code_review :: security:'no secrets' :: perf:'O(n) or better'")
# Creates gates with IDs "security" and "perf" for tracking in output
# 3. Registered gate IDs
prompt_engine(command:"analysis", gates:["technical-accuracy", "research-quality"])
# 4. Quick gates (recommended for dynamic validation)
prompt_engine(command:"code_review", gates:[
{"name": "Test Coverage", "description": "All functions have unit tests"},
{"name": "Error Handling", "description": "Proper try/catch patterns"}
])
```
Named inline gates (`:: id:"criteria"`) are useful when you want:
- Trackable gate IDs in output (shows as "security" not "Inline Validation Criteria")
- Multiple distinct validation criteria in one command
- Self-documenting commands that LLMs can parse unambiguously
Ground-truth validation via shell command exit codes. Exit 0 = PASS, non-zero = FAIL.
```bash
# Basic verification
prompt_engine(command:">>implement :: verify:'npm test'")

# With preset (controls retry limits)
prompt_engine(command:">>fix-bug :: verify:'pytest' :full")

# Presets: :fast (1 attempt), :full (5), :extended (10)
prompt_engine(command:">>refactor :: verify:'cargo test' :extended")

# Explicit options override presets
prompt_engine(command:">>feature :: verify:'npm test' max:8 timeout:120")

# Autonomous loop (Stop hook integration)
prompt_engine(command:">>bugfix :: verify:'npm test' :full loop:true")
```

**How it works:**
- Command runs after each response
- If FAIL + attempts remain → bounce-back (Claude retries automatically)
- If FAIL + max reached → escalation (user chooses `retry`/`skip`/`abort` via `gate_action`)
- With `loop:true` → Stop hook blocks completion until tests pass
**Presets:**

| Preset | Attempts | Timeout |
|---|---|---|
| `:fast` | 1 | 30s |
| `:full` | 5 | 5 min |
| `:extended` | 10 | 10 min |
**Options:**

| Option | Description |
|---|---|
| `max:N` | Override max attempts |
| `timeout:N` | Override timeout in seconds |
| `loop:true` | Enable autonomous Stop hook integration |
See Ralph Loops Guide for advanced patterns including context isolation and checkpoints.
These work without defining prompts:

```bash
prompt_engine(command:">>listprompts")      # List all prompts
prompt_engine(command:">>help")             # Show help
prompt_engine(command:">>status")           # Server status
prompt_engine(command:">>gates")            # List canonical gates
prompt_engine(command:">>gates security")   # Search gates by keyword
prompt_engine(command:">>guide gates")      # Gate syntax reference
```

Prompts can include script tools that auto-trigger when user args match the tool's JSON schema. This enables wizard-style meta-prompts.
**Two-Phase UX:**

| Phase | What Happens | Example |
|---|---|---|
| Design | Args don't match schema → Template shows guidance | `>>create_gate name:"Code Quality"` |
| Validation | Args match schema → Script runs, results in template | `>>create_gate id:"code-quality" name:"Code Quality" type:"validation" description:"..."` |
| Auto-Execute | Script returns `valid: true` → MCP tool called | Creates gate via resource_manager |
**Design phase (missing required fields — shows guidance):**

```bash
prompt_engine(command:">>create_gate name:'Code Quality'")
# Result: Template renders design guidance with field descriptions
```

**Validation phase (all required fields — script runs):**

```bash
prompt_engine(command:">>create_gate id:'code-quality' name:'Code Quality' type:'validation' description:'Ensures code meets standards' guidance:'Check naming, error handling, tests'")
# Result: Script validates → returns {valid: true, auto_execute: {...}} → gate created
```

**Available meta-prompts:**
- `>>create_gate` — Quality gate authoring
- `>>create_prompt` — Prompt/chain authoring
- `>>create_methodology` — Framework authoring
See Script Tools Guide for building your own.
> **Tip:** New to prompts? The Build Your First Prompt tutorial gets you from zero to a working prompt in under 5 minutes.
Create, update, delete, and manage prompts, gates, and methodologies through a single unified interface.
```bash
resource_manager(resource_type:"prompt|gate|methodology", action:"...", ...)
```

| Type | Description | Specific Actions |
|---|---|---|
| `prompt` | Template and chain management | `analyze_type`, `analyze_gates`, `guide` |
| `gate` | Quality validation criteria | — |
| `methodology` | Execution frameworks | `switch` |
All resource types support these actions:

| Action | Purpose | Required Params | Note |
|---|---|---|---|
| `list` | List all resources | — | Prefer `resource://` URIs |
| `inspect` | Get resource details | `id` | Prefer `resource://` URIs |
| `create` | Create new resource | `id`, type-specific | |
| `update` | Modify existing resource | `id`, fields to update | |
| `delete` | Remove resource | `id`, `confirm:true` | |
| `reload` | Hot-reload from disk | `id` (optional) | |
| `history` | View version history | `id` | |
| `rollback` | Restore previous version | `id`, `version`, `confirm:true` | |
| `compare` | Compare two versions | `id`, `from_version`, `to_version` | |
> **Note:** For `list` and `inspect`, prefer MCP Resources (4-30x more token efficient). Use tool actions as fallback when filtering is needed or the client doesn't support resources.
**Prompt examples:**

```bash
# List all prompts (prefer resources)
ReadMcpResourceTool uri="resource://prompt/"

# Filter by category (use tools when filtering needed)
resource_manager(resource_type:"prompt", action:"list", filter:"category:analysis")

# Get prompt details (prefer resources)
ReadMcpResourceTool uri="resource://prompt/security_audit"

# Create a prompt
resource_manager(
  resource_type:"prompt",
  action:"create",
  id:"weekly_report",
  name:"Weekly Report Generator",
  category:"reporting",
  description:"Generates formatted weekly status report",
  user_message_template:"Generate a weekly report for {{team}} covering {{date_range}}",
  arguments:[
    {"name":"team", "required":true},
    {"name":"date_range", "required":true}
  ]
)

# Update a prompt
resource_manager(resource_type:"prompt", action:"update", id:"weekly_report", description:"Updated")

# Delete a prompt
resource_manager(resource_type:"prompt", action:"delete", id:"old_prompt", confirm:true)

# Get execution type recommendation
resource_manager(resource_type:"prompt", action:"analyze_type", id:"my_prompt")

# Get gate suggestions
resource_manager(resource_type:"prompt", action:"analyze_gates", id:"my_prompt")
```

**Gate examples:**

```bash
# List all gates (prefer resources for discovery)
ReadMcpResourceTool uri="resource://gate/"

# Inspect gate (prefer resources — includes inline guidance)
ReadMcpResourceTool uri="resource://gate/source-verification"

# Guidance only (resources-exclusive)
ReadMcpResourceTool uri="resource://gate/source-verification/guidance"

# Create a gate (tools required)
resource_manager(
  resource_type:"gate",
  action:"create",
  id:"source-verification",
  name:"Source Verification",
  gate_type:"validation",
  description:"Ensures all claims are properly sourced",
  guidance:"All factual claims must cite sources. No unsourced statistics.",
  pass_criteria:["All claims have citations", "Sources are authoritative"]
)

# Update a gate
resource_manager(resource_type:"gate", action:"update", id:"source-verification", guidance:"Updated guidance...")

# Delete a gate
resource_manager(resource_type:"gate", action:"delete", id:"old-gate", confirm:true)
```

**Methodology examples:**

```bash
# List all methodologies (prefer resources for discovery)
ReadMcpResourceTool uri="resource://methodology/"

# Inspect methodology (prefer resources — full content)
ReadMcpResourceTool uri="resource://methodology/cageerf"

# Switch active methodology (tools required)
resource_manager(resource_type:"methodology", action:"switch", id:"react", persist:true)

# Create a custom methodology
resource_manager(
  resource_type:"methodology",
  action:"create",
  id:"my-method",
  name:"My Custom Methodology",
  description:"A custom problem-solving framework",
  system_prompt_guidance:"Apply my methodology systematically...",
  phases:[
    {"id":"phase1", "name":"Define", "description":"Define the problem"},
    {"id":"phase2", "name":"Solve", "description":"Implement solution"}
  ]
)
```

### Key Parameters by Resource Type
**Prompt Parameters:**

| Parameter | Purpose |
|---|---|
| `category` | Prompt category tag |
| `user_message_template` | Prompt body with `{{variables}}` |
| `system_message` | Optional system message |
| `arguments` | Array of `{name, required, description}` |
| `chain_steps` | Chain step definitions |
| `gate_configuration` | Gate include/exclude lists |
**Gate Parameters:**

| Parameter | Purpose |
|---|---|
| `gate_type` | `validation` (pass/fail) or `guidance` (advisory) |
| `guidance` | Gate criteria content |
| `pass_criteria` | Array of success conditions |
| `activation` | When gate activates (categories, frameworks) |
**Methodology Parameters:**

| Parameter | Purpose |
|---|---|
| `system_prompt_guidance` | Injected guidance content |
| `phases` | Array of phase definitions |
| `gates` | Gate include/exclude configuration |
| `persist` | Save switch to config (for `switch` action) |
> **Tip:** Full schema reference: Prompt Schema · Chain Schema · Gate Configuration
Runtime configuration and monitoring.
```bash
# Server health check
system_control(action:"status")

# List available frameworks
system_control(action:"framework", operation:"list")

# Switch active framework
system_control(action:"framework", operation:"switch", framework:"ReACT")

# View execution analytics
system_control(action:"analytics", show_details:true)

# List available gates
system_control(action:"gates", operation:"list")
```

| Action | Operations | Purpose |
|---|---|---|
| `status` | — | Runtime overview |
| `framework` | `list`, `switch`, `enable`, `disable` | Methodology management |
| `gates` | `list`, `enable`, `disable`, `status` | Gate management |
| `analytics` | — | Execution metrics |
| `config` | — | View config overlays |
| `changes` | `list` | Resource change audit log |
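The `config` action has no dedicated example above; a minimal sketch (assuming it takes no additional parameters, per the "—" in the operations column):

```bash
# View config overlays
system_control(action:"config")
```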
Know what changed, when, and how. The server logs every prompt and gate modification—whether from MCP tools, filesystem edits, or external processes.
```bash
# View recent changes
system_control(action:"changes", operation:"list")

# Filter by source (who made the change)
system_control(action:"changes", operation:"list", source:"filesystem")
system_control(action:"changes", operation:"list", source:"mcp-tool")

# Filter by resource type
system_control(action:"changes", operation:"list", resourceType:"prompt")
system_control(action:"changes", operation:"list", resourceType:"gate")

# Filter by time
system_control(action:"changes", operation:"list", since:"2026-01-20T00:00:00Z")

# Limit results
system_control(action:"changes", operation:"list", limit:10)
```

**Change Sources:**

| Source | Meaning |
|---|---|
| `filesystem` | Hot-reload detected file change |
| `mcp-tool` | Created/updated via resource_manager |
| `external` | Changed while server was down (detected on startup) |
Why this matters: Debug sync issues between your editor and the server. Track which prompts changed during a session. Audit who modified what before a deploy.
The server injects guidance into prompts. Control this per-execution or globally.

| Type | What It Adds | Default |
|---|---|---|
| `system-prompt` | Framework methodology | Every 2 steps |
| `gate-guidance` | Quality criteria | Every step |
| `style-guidance` | Response formatting | First step only |
```bash
# Full injection (default for new analysis)
prompt_engine(command:"%guided @CAGEERF audit_plan topic:'security'")

# No injection (follow-up in same context)
prompt_engine(command:"%clean next_step input:'data'")

# Gates only (skip framework reminder)
prompt_engine(command:"%lean code_review file:'api.ts'")
```

```json
{
  "injection": {
    "system-prompt": {
      "enabled": true,
      "frequency": { "mode": "every", "interval": 2 }
    },
    "gate-guidance": {
      "enabled": true,
      "frequency": { "mode": "every", "interval": 1 }
    }
  }
}
```

When a chain pauses for gate review, respond with a verdict:
```bash
prompt_engine(
  chain_id:"chain-analysis#2",
  gate_verdict:"GATE_REVIEW: PASS - All criteria met"
)
```

Accepted formats (case-insensitive):

| Format | Example |
|---|---|
| Full | `GATE_REVIEW: PASS - reason` |
| Full (colon) | `GATE_REVIEW: FAIL: reason` |
| Simplified | `GATE PASS - reason` |
| Minimal* | `PASS - reason` |
*Minimal format only works via the `gate_verdict` parameter, not in `user_response`.
**Requirements:**

- Rationale is always required
- `gate_verdict` takes precedence over parsed `user_response`
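Because the minimal format is accepted through `gate_verdict`, a short verdict like this works (the chain ID is a placeholder):

```bash
# Minimal format is valid here because it is passed via gate_verdict
prompt_engine(chain_id:"chain-analysis#2", gate_verdict:"PASS - criteria met")
```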
| Problem | Fix |
|---|---|
| Prompt not found | Run resource_manager(resource_type:"prompt", action:"list") to see available IDs |
| Edits not showing | Run resource_manager(resource_type:"prompt", action:"reload") |
| Chain stuck | Use force_restart:true or check system_control(action:"status") |
| Framework not switching | Use resource_manager(resource_type:"methodology", action:"switch") |
| Gate keeps failing | Use gate_action:"skip" to bypass, or gate_action:"retry" |
**Create and test a prompt:**

```bash
# 1. Create
resource_manager(resource_type:"prompt", action:"create", id:"my_prompt", ...)

# 2. Reload
resource_manager(resource_type:"prompt", action:"reload")

# 3. Test
prompt_engine(command:"my_prompt arg:'value'")

# 4. Iterate
resource_manager(resource_type:"prompt", action:"update", id:"my_prompt", ...)
```

**Run a framework-driven chain:**

```bash
# 1. Start chain with framework
prompt_engine(command:"@CAGEERF research topic:'X' --> analysis --> report")

# 2. Complete step 1, resume
prompt_engine(chain_id:"chain-research#1", user_response:"Research complete: ...")

# 3. Handle gate review if needed
prompt_engine(chain_id:"chain-research#2", gate_verdict:"GATE_REVIEW: PASS - Sources verified")

# 4. Continue to completion
prompt_engine(chain_id:"chain-research#3", user_response:"Analysis complete: ...")
```

**Switch frameworks:**

```bash
# Check current
system_control(action:"status")

# Switch
system_control(action:"framework", operation:"switch", framework:"5W1H")

# Execute with new framework
prompt_engine(command:"investigation target:'incident'")
```

### Version History
All resources (prompts, gates, methodologies) automatically track version history. Each update saves a snapshot before changes, enabling rollback and comparison.
Enable/disable in `config.json`:
```json
{
  "versioning": {
    "enabled": true,
    "max_versions": 50,
    "auto_version": true
  }
}
```

| Setting | Default | Purpose |
|---|---|---|
| `enabled` | `true` | Enable version tracking globally |
| `max_versions` | `50` | Maximum versions retained (FIFO pruning) |
| `auto_version` | `true` | Auto-save on updates (can skip per-call) |
```bash
# View version history for a prompt
resource_manager(resource_type:"prompt", action:"history", id:"my_prompt")

# View with limit
resource_manager(resource_type:"prompt", action:"history", id:"my_prompt", limit:10)

# Same for gates and methodologies
resource_manager(resource_type:"gate", action:"history", id:"code-quality")
resource_manager(resource_type:"methodology", action:"history", id:"cageerf")
```

**Output:** Table showing version number, date, changes summary, and description.
```bash
# Rollback a prompt to version 3
resource_manager(
  resource_type:"prompt",
  action:"rollback",
  id:"my_prompt",
  version:3,
  confirm:true
)
```

**Safety:** Current state is automatically saved as a new version before rollback, so you can always roll back from a rollback.
```bash
# Compare version 1 to version 5
resource_manager(
  resource_type:"prompt",
  action:"compare",
  id:"my_prompt",
  from_version:1,
  to_version:5
)
```

**Output:** Unified diff showing additions (+) and removals (-) between versions.
For bulk updates or minor edits, skip automatic version save:

```bash
resource_manager(
  resource_type:"prompt",
  action:"update",
  id:"my_prompt",
  description:"Minor typo fix",
  skip_version:true
)
```

Version history is stored in `.history.json` sidecar files alongside each resource:
```
resources/prompts/
├── development/
│   └── my_prompt/
│       ├── prompt.yaml
│       └── .history.json   # Version history
resources/gates/
├── code-quality/
│   ├── gate.yaml
│   └── .history.json
```
### CLI Configuration

Override resource paths via CLI flags or environment variables. All flags accept both `--flag=value` and `--flag value` formats.
```bash
node dist/index.js --transport stdio \
  --prompts /path/to/prompts \
  --gates /path/to/gates \
  --methodologies /path/to/methodologies \
  --styles /path/to/styles \
  --scripts /path/to/scripts \
  --workspace /path/to/workspace \
  --config /path/to/config.json
```

| Transport | Flag | Use Case |
|---|---|---|
| STDIO | `--transport=stdio` | Claude Desktop, Claude Code |
| Streamable HTTP | `--transport=streamable-http` | Web dashboards, remote APIs (use this for HTTP) |
| SSE (deprecated) | `--transport=sse` | Legacy integrations |
| Dual mode | `--transport=both` | STDIO + SSE simultaneously |
For HTTP clients, use Streamable HTTP. It's the current MCP standard and replaces SSE.
| Variable | Description |
|---|---|
| `MCP_RESOURCES_PATH` | Base path for all resources (prompts/, gates/, etc.) |
| `MCP_PROMPTS_PATH` | Override prompts directory |
| `MCP_GATES_PATH` | Override gates directory |
| `MCP_METHODOLOGIES_PATH` | Override methodologies directory |
| `MCP_STYLES_PATH` | Override styles directory |
| `MCP_SCRIPTS_PATH` | Override scripts directory |
| `MCP_WORKSPACE` | Workspace root for config resolution |
| `MCP_CONFIG_PATH` | Override config.json path |
Path resolution follows this priority (first match wins):

1. CLI flags — `--prompts /path` (highest priority, explicit override)
2. Individual env vars — `MCP_PROMPTS_PATH` (per-resource override)
3. Unified env var — `MCP_RESOURCES_PATH/prompts/` (all resources)
4. Package defaults — `server/resources/prompts/` (lowest priority)
**Example: MCP config with custom resources**

```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"],
      "env": {
        "MCP_RESOURCES_PATH": "/home/user/my-resources"
      }
    }
  }
}
```

> **Tip:** Something not working? The Troubleshooting Guide covers common issues with server startup, client connections, chains, and gates.
| Component | Location |
|---|---|
| Prompt definitions | server/resources/prompts/{category}/{id}/prompt.yaml |
| Gate definitions | server/resources/gates/{id}/gate.yaml |
| Style definitions | server/resources/styles/{id}/style.yaml |
| Methodologies | server/resources/methodologies/{id}/methodology.yaml |
| Chain sessions | SQLite (runtime-state/state.db, table chain_sessions) |
| Resource changes | runtime-state/resource-changes.jsonl |
| Server config | server/config.json |
Related docs:
- Prompt Authoring — Tutorial
- Prompt Schema — Configuration reference
- Chain Schema — Chain configuration
- Gate Configuration — Gate configuration
- Architecture — System internals
- Script Tools — Prompt-scoped script tool configuration