# MCP Expertise Toolkit

Turn your domain expertise into an AI-accessible knowledge base. This project is a template for building Model Context Protocol (MCP) servers that deliver feedback and guidance to AI assistants. Deploy it on Cloudflare Workers, and AI tools like Claude, Cursor, and others can query your expertise directly.
Cloudflare is well-suited for hosting remote MCP servers. Its Workers platform handles the transport layer, and the agents framework manages persistent client sessions.
This is a template repository. Fork it, customize it with your expertise, then deploy it.
- Why This Matters
- How It Works
- Quick Start
- YAML Schema
- MCP Tools Created
- Adapting for Your Domain
- MCP Client Setup
- Deployment Options
- Privacy and Threat Model
- Prerequisites
- Development
- Author
## Why This Matters

You have expertise. Maybe you're good at writing, code review, recipe development, or security analysis. AI assistants can help users in these domains, but they lack your specific knowledge and standards.
This toolkit lets you codify that expertise in a YAML file and make it available through MCP. When users connect their AI assistant to your server, the AI can query your guidelines to provide feedback shaped by your expertise.
You might use this to:
- Help users improve their writing with your editorial standards
- Guide developers with your code review criteria
- Share your domain knowledge with anyone who has an MCP-compatible AI tool
To see this approach in action, take a look at how it helps people write better incident response reports.
## How It Works

```
┌────────────────────────────────────────────────────────────────────┐
│  Your Expertise (YAML)                                             │
│                                                                    │
│  Principles, checkpoints, quality checks, and review guidance      │
│  codified in a structured format that AI can understand.           │
└─────────────────────────────────┬──────────────────────────────────┘
                                  │
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│  Cloudflare R2                                                     │
│                                                                    │
│  Stores your expertise YAML files. Upload one or more .yaml files. │
│  The Worker discovers all files and creates tools for each domain. │
└─────────────────────────────────┬──────────────────────────────────┘
                                  │
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│  Cloudflare Worker                                                 │
│                                                                    │
│  Implements the MCP server. Parses YAML and exposes tools.         │
│  Uses the @modelcontextprotocol/sdk for protocol handling.         │
└─────────────────────────────────┬──────────────────────────────────┘
                                  │
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│  MCP Clients                                                       │
│  (Claude Desktop, Claude Code, Cursor, etc.)                       │
│                                                                    │
│  Tools available to the AI:                                        │
│  • load_{prefix}_context — Get expertise for creating content      │
│  • review_{prefix}_content — Get criteria for reviewing content    │
│  • get_{prefix}_guidelines — Get formatted guidelines by topic     │
│  • get_capabilities — List all available tools                     │
└─────────────────────────────────┬──────────────────────────────────┘
                                  │
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│  User's Content (stays local)                                      │
│                                                                    │
│  The AI analyzes the user's content locally using your guidelines. │
│  User content is not sent to your server — only tool requests.     │
└────────────────────────────────────────────────────────────────────┘
```
The privacy model: Your server returns expertise and guidelines. The AI assistant analyzes the user's content locally using those guidelines. User content never leaves their machine.
## Quick Start

You can follow these steps manually or point an AI coding tool (Claude Code, Cursor, etc.) at this repo and ask it to set things up.
```sh
git clone https://github.com/YOUR_USERNAME/mcp-expertise-toolkit.git
cd mcp-expertise-toolkit
bun install   # or: npm install
```

Start with the starter template in `content/`:
```sh
cp content/_starter-template.yaml content/my-domain.yaml
```

Edit the file and replace all [REPLACE: ...] placeholders with your domain expertise. See `content/readme-review.yaml` and `content/bbq-scoring.yaml` for complete examples.
Each file has a toolPrefix that determines its tool names. You can deploy one file or multiple files to the same server.
```sh
bun run validate
```

This checks your YAML against the schema and shows what tools will be created.
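To make the checks concrete, here is a simplified TypeScript sketch of the kind of structural validation the script performs. It is illustrative only: the real `scripts/validate-expertise.ts` uses Zod schemas from `src/types.ts`, and `validateExpertise` is a made-up helper name.

```typescript
// Minimal structural check for an expertise file, mirroring required
// fields of the schema. Illustrative only: the real script uses Zod
// schemas from src/types.ts, and validateExpertise is a made-up name.
interface ExpertiseFile {
  version: string;
  meta: { domain: string; toolPrefix: string; author?: string };
  principles: { name: string; guidelines: string[] }[];
}

function validateExpertise(data: unknown): { ok: boolean; errors: string[] } {
  const errors: string[] = [];
  const d = data as Partial<ExpertiseFile> | null;
  if (typeof d?.version !== "string") errors.push("version: required");
  if (typeof d?.meta?.domain !== "string") errors.push("meta.domain: required");
  if (typeof d?.meta?.toolPrefix !== "string") {
    errors.push("meta.toolPrefix: required");
  }
  const principles = d?.principles;
  if (!Array.isArray(principles) || principles.length === 0) {
    errors.push("principles: expected non-empty array");
  }
  return { ok: errors.length === 0, errors };
}

// A minimal valid document passes; a broken one reports each problem.
const good = validateExpertise({
  version: "1.0.0",
  meta: { domain: "Code Review", toolPrefix: "code" },
  principles: [{ name: "Readability", guidelines: ["Name things well"] }],
});
const bad = validateExpertise({ version: "1.0.0", meta: { domain: "X" } });
// good.ok === true; bad.errors lists the toolPrefix and principles problems
```

Running the real validator gives richer messages, but the pass/fail logic follows this shape.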
Cloudflare Workers run your code at the edge, close to users worldwide. R2 is Cloudflare's object storage for your expertise file. Both have generous free tiers.
```sh
# Authenticate with Cloudflare
npx wrangler login

# Create R2 bucket for your expertise file
npx wrangler r2 bucket create mcp-expertise-data

# Upload your expertise file(s)
npx wrangler r2 object put mcp-expertise-data/your-domain.yaml \
  --file content/your-domain.yaml \
  --content-type "text/yaml"

# To upload multiple domains:
# npx wrangler r2 object put mcp-expertise-data/another-domain.yaml \
#   --file content/another-domain.yaml --content-type "text/yaml"

# Deploy the Worker
bun run deploy
```

The deploy command outputs your server URL (e.g., https://mcp-expertise-server.YOUR-ACCOUNT.workers.dev).
Visit your Worker URL in a browser. You should see JSON with your domain name and available tools.
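If you prefer to script that check, here is a sketch of a type guard for the health JSON. The `domain` and `tools` field names are assumptions for illustration; verify the actual response shape against what `src/index.ts` returns.

```typescript
// Hypothetical shape of the health JSON (field names assumed for
// illustration; verify against what src/index.ts actually returns).
interface HealthResponse {
  domain: string;
  tools: string[];
}

function isHealthResponse(data: unknown): data is HealthResponse {
  if (typeof data !== "object" || data === null) return false;
  const d = data as Record<string, unknown>;
  return (
    typeof d.domain === "string" &&
    Array.isArray(d.tools) &&
    d.tools.every((t) => typeof t === "string")
  );
}

// Example: check a response body fetched from the Worker URL.
const body = '{"domain": "Code Review", "tools": ["load_code_context"]}';
const healthy = isHealthResponse(JSON.parse(body));
// healthy === true
```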
## YAML Schema

Your expertise file has these main sections:
```yaml
version: "1.0.0"

meta:
  domain: "Your Domain"          # e.g., "Code Review", "Recipe Feedback"
  author: "Your Name"
  description: "What this expertise covers"
  toolPrefix: "yourdomain"       # Creates tools like load_yourdomain_context

principles:                      # High-level guidelines (3-5 recommended)
  - name: "Core Principle"
    guidelines:
      - "First guideline"
      - "Second guideline"
    examples:                    # Optional but helpful
      - bad: "Example of poor practice"
        good: "Example of good practice"

checkpoints:                     # Things to verify in content
  - id: "section_id"
    name: "Section Name"
    purpose: "Why this matters"
    whatIndicatesPresence:       # Semantic descriptions, not keywords
      - "Concept to look for"
    commonProblems:
      - "What goes wrong when missing"

qualityChecks:                   # Specific issues to flag
  issue_type:
    whatToCheck: "What to look for"
    whyItMatters: "Why this matters"
    examples:
      - bad: "Example of the problem"
        good: "How to fix it"

reviewGuidance:                  # How to deliver feedback
  feedbackStructure:
    - "Start with strengths"
    - "Be specific"
  tone:
    - "Collaborative, not critical"
```

See `docs/schema-reference.md` for the complete format.
Describe concepts, not keywords. AI understands meaning.
```yaml
# Weak - keyword matching
whatIndicatesPresence:
  - "introduction"
  - "overview"

# Strong - semantic understanding
whatIndicatesPresence:
  - "Clear statement of what the content covers"
  - "Explanation of why the reader should care"
```

## MCP Tools Created

Based on your `meta.toolPrefix`, the server creates these tools:
| Tool | Purpose |
|---|---|
| `load_{prefix}_context` | Load expertise context for creating or improving content |
| `review_{prefix}_content` | Get review criteria for critiquing existing content |
| `get_{prefix}_guidelines` | Get formatted guidelines for specific topics |
| `get_capabilities` | List all available tools |
Parameters for `load_{prefix}_context`:

| Parameter | Description |
|---|---|
| `detail_level` | `minimal` (~2k tokens), `standard` (~5k tokens), `comprehensive` (~10k tokens) |
| `topics` | Specific areas: `completeness`, `quality`, `principles`, `categories`, `requirements`, `all` |
| `include_examples` | Include good/bad examples (default: `false`) |
| `category` | Filter to a specific content category |
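The naming scheme is mechanical, so the tool set for any domain can be derived from its prefix. A small TypeScript sketch; `toolNamesFor` and `DETAIL_BUDGETS` are illustrative names, not part of the server:

```typescript
// Derive the per-domain tool names a given toolPrefix produces,
// following the load_/review_/get_ scheme in the table above.
// Sketch only -- actual registration happens in src/index.ts.
function toolNamesFor(prefix: string): string[] {
  return [
    `load_${prefix}_context`,
    `review_${prefix}_content`,
    `get_${prefix}_guidelines`,
  ];
}

// Approximate token budgets for detail_level (per the table above).
const DETAIL_BUDGETS: Record<string, number> = {
  minimal: 2_000,
  standard: 5_000,
  comprehensive: 10_000,
};

const names = toolNamesFor("bbq");
// names: ["load_bbq_context", "review_bbq_content", "get_bbq_guidelines"]
// get_capabilities is shared across domains rather than prefixed.
```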
See it in action: Each example includes a demo showing a realistic session:
- DEMO-readme-review.md — README review for humans + AI assistants
- DEMO-bbq-scoring.md — BBQ competition judging
## Adapting for Your Domain

To create expertise for code review, recipes, or another domain:
- Identify your principles — What are the 3-5 most important guidelines?
- Define checkpoints — What must good content include?
- List quality checks — What specific issues do you commonly flag?
- Set the tone — How should feedback be delivered?
Once you've thought through these questions, start with content/_starter-template.yaml and fill in your answers.
Example: Code Review

```yaml
meta:
  domain: "Code Review"
  toolPrefix: "code"

principles:
  - name: "Readability"
    guidelines:
      - "Code should be understandable without comments"
      - "Function names should describe what they do"

checkpoints:
  - id: "error_handling"
    name: "Error Handling"
    purpose: "Ensure failures are handled gracefully"
    whatIndicatesPresence:
      - "Try-catch blocks around operations that can fail"
      - "Meaningful error messages"
    commonProblems:
      - "Silent failures"
      - "Generic error messages"

qualityChecks:
  complexity:
    whatToCheck: "Functions doing too many things"
    whyItMatters: "Complex functions are hard to test and maintain"
    examples:
      - bad: "Function that fetches, validates, transforms, and saves"
        good: "Separate functions for each responsibility"
```

## MCP Client Setup

Once deployed, users can connect their AI assistants to your server.
### Claude Code

```sh
claude mcp add your-expertise --transport http https://YOUR-WORKER.workers.dev/mcp
```

Or add to `~/.claude/settings.local.json`:
```json
{
  "mcpServers": {
    "your-expertise": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://YOUR-WORKER.workers.dev/mcp"]
    }
  }
}
```

### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "your-expertise": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://YOUR-WORKER.workers.dev/mcp"]
    }
  }
}
```

### Cursor

Add to your Cursor MCP settings:
```json
{
  "mcpServers": {
    "your-expertise": {
      "url": "https://YOUR-WORKER.workers.dev/mcp"
    }
  }
}
```

Use the `mcp-remote` package to connect via the `/mcp` endpoint (streamable HTTP, recommended) or the `/sse` endpoint (SSE transport, legacy).
## Deployment Options

The included `wrangler.jsonc` configures deployment to Cloudflare Workers with R2 storage. Cloudflare's free tier is sufficient for most use cases.
To use a custom domain instead of workers.dev:

- Add your domain to Cloudflare (DNS must be managed by Cloudflare)
- Edit `wrangler.jsonc`:

  ```jsonc
  {
    "routes": [
      { "pattern": "expertise.yourdomain.com", "custom_domain": true }
    ],
    "workers_dev": false
  }
  ```

- Deploy: `bun run deploy`
The server is standard TypeScript that runs anywhere. To deploy elsewhere:

- Replace the R2 storage calls in `src/index.ts` with your storage backend (filesystem, S3, database)
- Adapt the HTTP handling for your platform (Express, Hono, Fastify)
- The MCP protocol handling uses `@modelcontextprotocol/sdk`, which is platform-agnostic
The `bun run dev` command runs a local server using Wrangler's emulation of Cloudflare Workers. For a standalone local server without any Cloudflare dependency:

What needs to change:

- Replace R2 storage with filesystem reads (`fs.readFileSync()`)
- Replace the `McpAgent` class with a direct `McpServer` + an HTTP framework (Express, Hono, Fastify)
- Use a standard Node.js/Bun HTTP server instead of the Workers fetch handler

What stays the same:

- All type definitions (`src/types.ts`)
- YAML parsing and Zod validation
- Tool definitions and context builders
- The `@modelcontextprotocol/sdk` package
The core MCP logic is platform-agnostic. The Cloudflare-specific code is in the HTTP handling and storage layer.
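As an example of the storage swap, discovery over the local filesystem reduces to listing `.yaml` files in a directory. A sketch using only Node's standard library; `discoverExpertiseFiles` is an illustrative name, not code from `src/index.ts`:

```typescript
import { mkdtempSync, readFileSync, readdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Stand-in for the R2 list/get calls: discover every .yaml file in a
// directory and return a map of file name -> raw contents.
// discoverExpertiseFiles is an illustrative name, not toolkit code.
function discoverExpertiseFiles(dir: string): Map<string, string> {
  const files = new Map<string, string>();
  for (const name of readdirSync(dir)) {
    if (name.endsWith(".yaml")) {
      files.set(name, readFileSync(join(dir, name), "utf8"));
    }
  }
  return files;
}

// Demo against a throwaway directory.
const dir = mkdtempSync(join(tmpdir(), "expertise-"));
writeFileSync(join(dir, "bbq-scoring.yaml"), "version: '1.0.0'\n");
writeFileSync(join(dir, "notes.txt"), "ignored");
const found = discoverExpertiseFiles(dir);
// found holds one entry: bbq-scoring.yaml
```

Everything downstream of discovery (parsing, validation, tool registration) can stay as-is.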
## Privacy and Threat Model

This design keeps user content local. Consider these characteristics before deploying:
| Exposure | Mechanism |
|---|---|
| Your expertise content | Anyone with the URL can call the tools |
| Tool parameters | Server logs may contain checkpoint IDs, topic names |
| Server metadata | Health endpoint reveals domain name, author, tool names |

| Protected | Why |
|---|---|
| User content | AI analyzes locally; content never sent to server |
| User queries | The AI's prompts stay between user and AI provider |
| Usage patterns | No tracking or analytics built in |
- Your expertise is meant to be shared. The YAML content becomes public to anyone who knows the URL.
- No authentication by default. The MCP server accepts connections from any client.
- The AI assistant is trusted. Users trust their AI provider not to leak their content.
- Treat your expertise YAML as public documentation
- Don't embed sensitive information in the YAML
- If you need access control, consider Cloudflare Access
## Prerequisites

| Requirement | What It's For |
|---|---|
| Node.js 18+ or Bun | Runs the validation script and development server |
| Cloudflare account | Hosts the Worker and R2 bucket. Free tier is sufficient. |
| Wrangler CLI | Deploys the Worker and manages R2. Installed via bun install. |
## Development

```
mcp-expertise-toolkit/
├── content/
│   ├── _starter-template.yaml   # Start here: minimal template for your expertise
│   ├── README.md                # Guide to creating expertise files
│   ├── readme-review.yaml       # Sample: README review for humans + AI
│   ├── bbq-scoring.yaml         # Sample: BBQ competition judging
│   ├── DEMO-readme-review.md    # Demo session for README review
│   └── DEMO-bbq-scoring.md      # Demo session for BBQ scoring
├── src/
│   ├── index.ts                 # MCP server implementation
│   └── types.ts                 # TypeScript types and Zod schemas
├── scripts/
│   └── validate-expertise.ts    # Validates your YAML
├── docs/
│   └── schema-reference.md      # Complete YAML format
├── wrangler.jsonc               # Cloudflare Worker config
└── package.json
```
Sample files included:
| File | Domain | Why It's Interesting |
|---|---|---|
| `readme-review.yaml` | README review | Expertise for both human readers AND AI coding assistants |
| `bbq-scoring.yaml` | BBQ competition | Highly specialized criteria (KCBS judging) that generic AI doesn't know |
Multi-domain support: The server automatically discovers all .yaml files in the R2 bucket and creates tools for each. Each file's toolPrefix must be unique. You can deploy a single domain or combine multiple domains in one server.
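Since colliding prefixes would produce duplicate tool names, a small pre-deployment check can catch them early. A sketch; `findDuplicatePrefixes` is an illustrative helper, not part of the toolkit:

```typescript
// Each expertise file's toolPrefix must be unique; duplicates would
// produce colliding tool names (two load_code_context tools, say).
// findDuplicatePrefixes is an illustrative helper, not toolkit code.
function findDuplicatePrefixes(prefixes: string[]): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const p of prefixes) {
    if (seen.has(p)) dupes.add(p);
    seen.add(p);
  }
  return [...dupes];
}

const dupes = findDuplicatePrefixes(["readme", "bbq", "readme"]);
// dupes: ["readme"]
```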
```sh
bun run dev          # Local development server (http://localhost:8787)
bun run validate     # Validate expertise YAML
bun run type-check   # TypeScript checking
bun run deploy       # Deploy to Cloudflare
```

How this approach compares to Claude Code Skills:

| Aspect | MCP Server | Claude Code Skills |
|---|---|---|
| Works with | Any MCP client | Claude Code only |
| Updates | Deploy to server | Sync local files |
| Best for | Teams, shared expertise | Personal workflows |
Use MCP when you want expertise accessible from multiple AI tools or shared across a team.
Tools not appearing after deployment?

- Check `get_capabilities` — call this tool to see loaded domains. If your expertise file failed validation, a Diagnostics section will show the file name and error:

  ```
  ## Diagnostics
  Some expertise files failed validation:
  - **my-domain.yaml:** principles: Invalid input: expected array...
  ```

- Run local validation to get detailed error messages: `bun run validate`
- Check the R2 upload — verify your YAML file was uploaded to the R2 bucket
- Check `toolPrefix` uniqueness — each expertise file needs a unique `toolPrefix` value
| File | Purpose |
|---|---|
| `src/index.ts` | MCP server: tool definitions, R2 loading, YAML parsing, context builders |
| `src/types.ts` | TypeScript interfaces and Zod validation schemas for expertise YAML |
| `content/_starter-template.yaml` | Template for creating new expertise domains |
| `scripts/validate-expertise.ts` | Validates YAML files against the schema |
| `wrangler.jsonc` | Cloudflare Worker and R2 bucket configuration |
```
Expertise YAML → R2 Bucket → Worker (parses YAML, creates MCP tools) → AI Client
                                                                          ↓
                                                     Local analysis of user content
```
- Each `.yaml` file in R2 becomes a set of MCP tools (prefixed by `toolPrefix`)
- The Worker auto-discovers all YAML files in the bucket
- User content never leaves the client; only expertise/guidelines flow from the server
## Author

Lenny Zeltser: Builder of security products and programs. Teacher of those who run them.