A Model Context Protocol (MCP) server for deterministic prompt optimization in Claude Code. Score prompts across 7 quality dimensions, auto-select from 11 Anthropic techniques, and return a structural scaffold. No LLM calls, no network, sub-millisecond.
Requirements:
From shell:

```bash
claude mcp add claude-orator-mcp -- npx claude-orator-mcp
```

From inside Claude (restart required):

Add this to our global mcp config: npx claude-orator-mcp
Install this mcp: https://github.com/Vvkmnn/claude-orator-mcp
From any manually configurable mcp.json (Cursor, Windsurf, etc.):

```json
{
  "mcpServers": {
    "claude-orator-mcp": {
      "command": "npx",
      "args": ["claude-orator-mcp"],
      "env": {}
    }
  }
}
```

There is no npm install required -- no external dependencies or databases, only deterministic heuristics.
However, if npx resolves the wrong package, you can force resolution with:

```bash
npm install -g claude-orator-mcp
```

Optionally, install the skill to teach Claude when to proactively optimize prompts:
```bash
npx skills add Vvkmnn/claude-orator-mcp --skill claude-orator --global
# Optional: add --yes to skip interactive prompt and install to all agents
```

This makes Claude automatically optimize prompts before dispatching subagents, writing system prompts, or crafting any prompt worth improving. The MCP works without the skill, but the skill improves discoverability.
For automatic prompt optimization with hooks and commands, install from the claude-emporium marketplace:

```
/plugin marketplace add Vvkmnn/claude-emporium
/plugin install claude-orator@claude-emporium
```

The claude-orator plugin provides:
Hooks (fire before subagent dispatch):
- Before Task -- Suggest prompt optimization before launching agents

Commands: `/reprompt-orator`
Requires the MCP server installed first. See the emporium for other Claude Code plugins and MCPs.
MCP server with a single tool. Prompt in, optimized prompt out.
Analyze a prompt across 7 quality dimensions, auto-select from 11 Anthropic techniques, and return a structurally optimized scaffold with before/after scores.
orator_optimize prompt="Write a function that sorts users"
> Returns optimized scaffold with XML tags, output format, examples section
orator_optimize prompt="You are a helpful assistant" intent="system"
> Returns role-assigned system prompt with structure and constraints
orator_optimize prompt="Extract all emails from this text" techniques=["xml-tags", "few-shot"]
> Force-applies specific techniques regardless of auto-selection
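Any MCP client reaches the tool through a standard `tools/call` request over stdio. A minimal sketch of the JSON-RPC message (argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "orator_optimize",
    "arguments": {
      "prompt": "Write a function that sorts users"
    }
  }
}
```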
Score meter (gradient fill bar):

```
πͺΆ 3.2 βββββββββββ 7.8
+xml-tags +few-shot +structured-output Β· 3 issues
Wrapped in XML tags, added examples, specified output format
```

Three-zone bar: βββ (baseline) βββββ (improvement) ββ (headroom to 10).

Minimal case (already well-structured):

```
πͺΆ ββ already well-structured (8.4)
```
Input:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | The raw prompt to optimize |
| `intent` | enum | No | code \| analysis \| creative \| extraction \| conversation \| system (auto-detected) |
| `target` | enum | No | claude-code \| claude-api \| claude-desktop \| generic (default: claude-code) |
| `techniques` | string[] | No | Force-apply specific technique IDs |
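Since the package ships with zod, the parameters above plausibly validate against a schema along these lines (a hypothetical sketch mirroring the table, not the shipped schema):

```typescript
import { z } from "zod";

// Hypothetical input schema mirroring the parameter table above.
const OratorOptimizeInput = z.object({
  prompt: z.string(),
  intent: z
    .enum(["code", "analysis", "creative", "extraction", "conversation", "system"])
    .optional(), // auto-detected when omitted
  target: z
    .enum(["claude-code", "claude-api", "claude-desktop", "generic"])
    .default("claude-code"),
  techniques: z.array(z.string()).optional(), // force-apply specific IDs
});
```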
Output:

| Field | Type | Description |
|---|---|---|
| `optimized_prompt` | string | Rewritten prompt scaffold (primary output) |
| `score_before` | number | Quality score of original (0-10) |
| `score_after` | number | Quality score after optimization (0-10) |
| `summary` | string | 1-line explanation of improvements |
| `detected_intent` | string | Auto-detected intent category |
| `applied_techniques` | string[] | Technique IDs applied |
| `issues` | string[] | Detected problems |
| `suggestions` | string[] | Actionable fixes |
The `optimized_prompt` is a structural scaffold. Claude refines it with domain knowledge, codebase context, and conversation history.
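For illustration, a response for the sorting example above might look like this (scores and strings are invented; field names match the table):

```json
{
  "optimized_prompt": "You are an expert software engineer...\n<task>\nWrite a function that sorts users\n</task>\n<requirements>\n...",
  "score_before": 3.2,
  "score_after": 7.8,
  "summary": "Wrapped in XML tags, added examples, specified output format",
  "detected_intent": "code",
  "applied_techniques": ["xml-tags", "few-shot", "structured-output"],
  "issues": ["No output format specified", "No examples provided"],
  "suggestions": ["Name the language and the sort key", "Specify edge-case handling"]
}
```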
How claude-orator-mcp works:
```
πͺΆ claude-orator-mcp
ββββββββββββββββββββ
orator_optimize
ββββββββββββββ
PROMPT
β
ββββββββββββββ΄βββββββββββββ
βΌ βΌ
βββββββββββββ ββββββββββββββ
β Detect β β Measure β
β Intent β β Complexity β
βββββββ¬ββββββ ββββββββ¬ββββββ
β β
system > code > word count +
extraction > clause depth
analysis > β
creative > β
conversation β
+ disambiguation β
+ fallback heuristics β
β β
ββββββββββββββ¬βββββββββββββ
β
βΌ
βββββββββββββββββββββ
β Score Before β
β β
β clarity 20% β strong verbs, single task
β specificity 20% β named tech, constraints
β structure 15% β XML tags, headers, lists
β examples 15% β input/output pairs
β constraints 10% β scope, edge cases
β output_fmt 10% β format specification
β efficiency 10% β no filler, no redundancy
β β
β ββββββββββ 3.2 β
ββββββββββ¬βββββββββββ
β
βΌ
βββββββββββββββββββββ techniques?
β Select Techniques ββββββ (force override)
β β
β when_to_use() Γ β 11 predicates
β intent match Γ β filtered
β score gaps Γ β sorted by impact
β cap at 4 β
ββββββββββ¬βββββββββββ
β
βΌ
βββββββββββββββββββββ
β Template Assembly β
β β
β role preamble β expert identity
β β <context> β grounding data first
β β <task> β XML-wrapped prompt
β β <requirements> β constraints + gaps
β β <examples> β multishot I/O pairs
β β output format β format specification
ββββββββββ¬βββββββββββ
β
βΌ
βββββββββββββββββββββ
β Score After β
β β
β ββββββββββββ 7.8β
ββββββββββ¬βββββββββββ
β
βΌ
OUTPUT
optimized_prompt
+ scores + techniques
+ issues + suggestions
```
Score meter (gradient fill bar):

```
βββββββββββββββββββββββββββββββββ
πͺΆ 3.2 βββββββββββ 7.8
+xml-tags +few-shot +structured-output
Wrapped in XML, added examples, format
βββ baseline βββ improvement ββ headroom
```
7 quality dimensions (weighted scoring, deterministic):
| Dimension | Weight | Measures |
|---|---|---|
| Clarity | 20% | Strong verbs, single task, no hedging |
| Specificity | 20% | Named tech, numbers, constraints |
| Structure | 15% | XML tags, headers, lists |
| Examples | 15% | Input/output pairs, demonstrations |
| Constraints | 10% | Negative constraints, scope, edge cases |
| Output Format | 10% | Format spec, structure definition |
| Token Efficiency | 10% | No filler, no redundancy |
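The weighted sum is straightforward to reproduce. A minimal TypeScript sketch, assuming per-dimension scores on a 0-10 scale (function and variable names here are illustrative, not the package internals):

```typescript
// Weights as documented above (sum to 1.0). Names are illustrative.
const WEIGHTS = {
  clarity: 0.2,
  specificity: 0.2,
  structure: 0.15,
  examples: 0.15,
  constraints: 0.1,
  outputFormat: 0.1,
  tokenEfficiency: 0.1,
} as const;

type Dimension = keyof typeof WEIGHTS;

// Each dimension scores 0-10; the overall score is the weighted sum.
function overallScore(scores: Record<Dimension, number>): number {
  return (Object.keys(WEIGHTS) as Dimension[]).reduce(
    (sum, dim) => sum + scores[dim] * WEIGHTS[dim],
    0,
  );
}

// Example: fairly clear but unstructured lands around 3.0:
// overallScore({ clarity: 6, specificity: 4, structure: 0, examples: 0,
//                constraints: 2, outputFormat: 0, tokenEfficiency: 8 })
```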
11 Anthropic techniques (auto-selected based on intent, scores, and complexity):
| ID | Name | Auto-selected when |
|---|---|---|
| `chain-of-thought` | Let Claude Think | Analysis intent, complex tasks |
| `xml-tags` | Use XML Tags | Long prompt + low structure score |
| `few-shot` | Multishot Examples | Low example score + extraction/code |
| `role-assignment` | System Prompts & Roles | System intent or low specificity |
| `structured-output` | Control Output Format | Low output format score |
| `prefill` | Structured Output Format | API target + extraction/code |
| `prompt-chaining` | Chain Complex Tasks | Complex + multiple subtasks |
| `uncertainty-permission` | Say "I Don't Know" | Analysis or extraction intent |
| `extended-thinking` | Extended Thinking | Complex + analysis/code intent |
| `long-context-tips` | Long Context | Long prompt (>2000 chars or >50 lines) |
| `tool-use` | Tool Use | Prompt mentions tool/function calling |
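The "Auto-selected when" column maps naturally onto per-technique predicates. A hypothetical rendering of two rows in TypeScript (shapes and thresholds are assumptions, not the shipped source; only the >2000 chars / >50 lines trigger is documented above):

```typescript
// Hypothetical predicate shapes for two techniques from the table above.
interface PromptContext {
  prompt: string;
  intent: string;
  scores: Record<string, number>; // per-dimension, 0-10
}

const whenToUse: Record<string, (ctx: PromptContext) => boolean> = {
  // "Long prompt + low structure score" -- the 400-char cutoff is invented
  "xml-tags": (ctx) => ctx.prompt.length > 400 && ctx.scores.structure < 5,
  // "Long prompt (>2000 chars or >50 lines)" -- documented trigger
  "long-context-tips": (ctx) =>
    ctx.prompt.length > 2000 || ctx.prompt.split("\n").length > 50,
};
```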
Core algorithms:

- Intent detection (`detectIntent`): Priority-ordered regex patterns across 6 categories: system > code > extraction > analysis > creative > conversation. Includes disambiguation (e.g., combined `system` + `code` signals resolve to `code`) and fallback heuristics for code blocks, "build me" patterns, and debugging language. A hedged sketch follows this list.
- Heuristic scoring (`scorePrompt`): 7-dimension weighted analysis. Each dimension scores 0-10; the overall score is the weighted sum. Also generates flat `issues[]` and `suggestions[]` arrays.
- Technique selection (`selectTechniques`): Each technique has a `when_to_use()` predicate. Techniques are auto-selected based on intent + scores + complexity, sorted by impact, and capped at 4.
- Template assembly (`optimize`): Builds the structural scaffold from the selected techniques. Context-first ordering: role β `<context>` β `<task>` β `<requirements>` β `<examples>` β output format.
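As referenced in the first bullet, a hedged sketch of priority-ordered intent matching (the patterns below are illustrative stand-ins, not the shipped regexes):

```typescript
// Illustrative priority order: first category whose pattern matches wins.
const INTENT_PATTERNS: Array<[intent: string, pattern: RegExp]> = [
  ["system", /\byou are\b|\bact as\b|system prompt/i],
  ["code", /\bfunction\b|\brefactor\b|\bdebug\b|\bimplement\b/i],
  ["extraction", /\bextract\b|\bparse\b|list all/i],
  ["analysis", /\banalyze\b|\bcompare\b|\bevaluate\b/i],
  ["creative", /\bstory\b|\bpoem\b|\bbrainstorm\b/i],
];

function detectIntent(prompt: string): string {
  for (const [intent, pattern] of INTENT_PATTERNS) {
    if (pattern.test(prompt)) return intent;
  }
  return "conversation"; // fallback category
}
```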
Design principles:
- Single tool: one entry point, minimal cognitive overhead
- Deterministic: same input, same output. No LLM calls, no network
- Scaffold, not final: the optimized prompt is structural; Claude adds substance
- Lean output: flat string arrays for issues/suggestions, no nested objects
- Weighted dimensions: clarity and specificity matter most (20% each)
- Technique cap: max 4 techniques per optimization (diminishing returns beyond that)
- Anti-pattern detection: 12 Claude-specific anti-patterns + 20 industry patterns from 34 production AI tools
- Zero dependencies: only `@modelcontextprotocol/sdk` + `zod`
Every existing prompt optimization tool requires LLM calls, labeled datasets, or evaluation infrastructure. When you need structural improvement at zero latency (CI/CD, subagent dispatch, offline), they cannot help.
| Feature | orator | DSPy | promptfoo | TextGrad | OPRO | LLMLingua | Anthropic Generator |
|---|---|---|---|---|---|---|---|
| Zero latency | Yes (<1ms) | No (LLM calls) | No (eval runs) | No (LLM calls) | No (LLM calls) | No (LLM calls) | No (LLM call) |
| Offline/airgapped | Yes | No | Partial | No | No | No | No |
| Deterministic | Yes | No | No | No | No | Partial | No |
| No labeled data | Yes | No (examples) | No (test cases) | No (feedback) | No (examples) | Yes | Yes |
| Claude-specific | Yes (anti-patterns) | No | No | No | No | No | Yes |
| MCP native | Yes | No | No | No | No | No | No |
| Structural scoring | 7 dimensions | None | Custom metrics | None | None | None | None |
| Dependencies | 0 (pure TS) | PyTorch + LLM | Node + LLM | PyTorch + LLM | LLM | PyTorch + LLM | LLM API |
DSPy: Stanford's framework for compiling LM programs with automatic prompt optimization. Requires labeled examples, LLM calls for optimization, and PyTorch. Optimizes for task accuracy, not structural quality. Latency: seconds to minutes per optimization. Use DSPy when you have labeled data and want to tune for a specific metric.
promptfoo: Test-driven prompt evaluation framework. Requires test cases, LLM calls for evaluation, and an evaluation dataset. Measures output quality, not prompt structure. Complementary: use Orator for structural scaffolding, then promptfoo to evaluate output quality.
TextGrad: Automatic differentiation via text feedback from LLMs. Requires LLM calls for both forward and backward passes. Research-oriented, PyTorch dependency. Latency: minutes. Use when iterating on prompt wording with measurable objectives.
OPRO: DeepMind's optimization by prompting. Uses an LLM to iteratively rewrite prompts. Requires examples of good/bad outputs, multiple LLM calls per iteration. Latency: minutes. Use when exploring creative prompt variations with evaluation feedback.
LLMLingua: Microsoft's prompt compression via perplexity-based token removal. Reduces token count by 2-20x but requires a local LLM for perplexity scoring. Different goal: compression, not structural improvement. Use when context window is the bottleneck.
Anthropic Prompt Generator: Anthropic's own tool that generates prompts via Claude. Excellent quality but requires an LLM call, non-deterministic, and not available offline or via MCP. Use when you want Claude to write your prompt from scratch.
Orator's approach is deliberately different: structural analysis via deterministic heuristics. No LLM calls means no API keys, no latency variance, no cost per optimization, and identical results every run. The trade-off is that Orator optimizes prompt structure (clarity, specificity, constraints, format) rather than prompt wording. It can't tell you if your prompt produces good output, only that it's well-formed for Claude. This makes it complementary to evaluation tools like promptfoo: scaffold with Orator, then validate with evals.
```bash
git clone https://github.com/Vvkmnn/claude-orator-mcp && cd claude-orator-mcp
npm install && npm run build
npm test
```

Package requirements:

- Node.js: >=20.0.0 (ES modules)
- Runtime: `@modelcontextprotocol/sdk`, `zod`
- Zero external databases: works with `npx`
Development workflow:

```bash
npm run build           # TypeScript compilation with executable permissions
npm run dev             # Watch mode with tsc --watch
npm run start           # Run the MCP server directly
npm run lint            # ESLint code quality checks
npm run lint:fix        # Auto-fix linting issues
npm run format          # Prettier formatting (src/)
npm run format:check    # Check formatting without changes
npm run typecheck       # TypeScript validation without emit
npm run test            # Lint + type check + vitest (25 tests)
npm run prepublishOnly  # Pre-publish validation (build + lint + format:check)
```

Git hooks (via Husky):

- pre-commit: Auto-formats staged `.ts` files with Prettier and ESLint
Contributing:
- Fork the repository and create feature branches
- Follow TypeScript strict mode and MCP protocol standards
Learn from examples:
- Official MCP servers for reference implementations
- TypeScript SDK for best practices
- Creating Node.js modules for npm package development
- Anthropic prompt engineering docs for technique details
Industry pattern data derived from deep analysis of system prompts from 34 AI coding tools collected in system-prompts-and-models-of-ai-tools, including Claude Code, Cursor, Windsurf, v0, Devin, Cline, Lovable, Replit, Amp, Gemini, and 25 others. Patterns are curated with prevalence data and embedded β no external dependency or installation required. Cross-referenced with research from the Prompt Report (1,500 papers surveyed) and Anthropic's prompt engineering documentation.
Cicero Denounces Catiline by Cesare Maccari (1889). "Quo usque tandem abutere, Catilina, patientia nostra?" (How long, Catiline, will you abuse our patience?) - Claudius.

