A meta-guide curating community Claude Code resources with unique contributions: evidence assessment, SDD methodology, context engineering, security frameworks, and measurement discipline.
Philosophy: We defer to community consensus where it exists (tool discovery, implementation guides) and focus on what we uniquely provide (architectural analysis, integration guidance, evidence validation).
Methodology: We adopt spec-driven development (SDD) as our foundational approach, aligned with industry standards like GitHub Spec Kit and agentskills.io.
🔗 Looking for tool recommendations? See shanraisshan/claude-code-best-practice (5.6k+ stars) for community-curated MCPs, plugins, and productivity tips. See COMMUNITY-RESOURCES.md for our complete community directory.
Using AI coding agents without structure typically produces:
- Inconsistent results across sessions
- Context loss in complex features
- "Works but wrong" implementations
- Difficult to maintain or extend
- Poor team coordination
A spec-driven approach that gives AI agents persistent context through structured artifacts:
```
your-project/
├── specs/               # Feature specifications (Specify phase)
├── ARCHITECTURE.md      # System design (Plan phase)
├── PLAN.md              # Current priorities (Tasks phase)
└── .claude/             # Claude Code implementation
    ├── CLAUDE.md        # Project context
    ├── settings.json    # Hook configurations
    ├── hooks/           # Automation scripts
    ├── commands/        # Slash commands
    └── skills/          # Reusable methodologies
```
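The layout above can be scaffolded in one step. A minimal sketch (directory and file names come straight from this README; the throwaway `mktemp` parent just makes it safe to run anywhere — in a real project you would run the `mkdir`/`touch` lines from your project root):

```shell
# Create the SDD directory skeleton under a scratch parent directory.
proj=$(mktemp -d)/your-project
mkdir -p "$proj"/specs "$proj"/.claude/hooks "$proj"/.claude/commands "$proj"/.claude/skills
# Empty placeholder files; fill these in during the Specify/Plan phases.
touch "$proj"/ARCHITECTURE.md "$proj"/PLAN.md "$proj"/.claude/CLAUDE.md "$proj"/.claude/settings.json
```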
This approach implements the 4-phase SDD model:
- Specify → Define what to build (CLAUDE.md, specs/)
- Plan → Technical design (ARCHITECTURE.md, DECISIONS.md)
- Tasks → Break down work (PLAN.md, TodoWrite)
- Implement → Execute with context (skills, hooks, one feature at a time)
Every project uses the same infrastructure pattern - just choose your tier:
| Tier | When | Time | What You Get |
|---|---|---|---|
| Tier 1: Baseline | All projects | 5 min | Stop hook + permissions |
| Tier 2: Active | Weekly work | 15 min | + CLAUDE.md + SessionStart |
| Tier 3: Team | Collaborators | 30 min | + GitHub Actions + /commit-push-pr |
There's no difference between "new" and "existing" projects - both follow the same tiered approach.
Run this in your project to get baseline protection:
```bash
mkdir -p .claude && cat > .claude/settings.json << 'EOF'
{
  "permissions": {
    "allow": ["Bash(git status*)", "Bash(git diff*)", "Bash(git log*)"]
  },
  "hooks": {
    "Stop": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "bash -c 'if ! git diff --quiet 2>/dev/null; then echo \"⚠️ Uncommitted changes\"; fi'"
      }]
    }]
  }
}
EOF
```

For Tier 2/3 setup with CLAUDE.md, hooks, and GitHub Actions:
Fetch https://raw.githubusercontent.com/flying-coyote/claude-code-project-best-practices/refs/heads/master/prompts/SETUP-PROJECT.md and follow its instructions.
See Project Infrastructure Pattern for the complete tiered approach.
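The Stop-hook command in the baseline settings.json can be exercised outside Claude Code. A rough sketch in a throwaway repo (assumes `git` is installed; the warning text is simplified here, without the emoji):

```shell
# Scratch repo with one committed file.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
echo a > file.txt
git add file.txt
git commit -qm init
# Modify a tracked file without committing, then run the hook's check.
echo b >> file.txt
out=$(bash -c 'if ! git diff --quiet 2>/dev/null; then echo "Uncommitted changes"; fi')
echo "$out"
```

Note that `git diff --quiet` only sees modifications to tracked files; a brand-new untracked file would not trigger the warning.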
This repository provides multiple entry points for different scenarios:
| Your Situation | Use This | Why |
|---|---|---|
| I have 5 minutes, want quick value | README Tier 1 Quick Start (above) | Immediate uncommitted/unpushed warnings with 4 lines in settings.json |
| Setting up new project from scratch | BOOTSTRAP-NEW-PROJECT.md | Full interactive setup with preset selection and best practices |
| Setting up infrastructure for any project | SETUP-PROJECT.md | Unified tiered approach (5/15/30 min) for new or existing projects |
| Auditing existing Claude Code setup | AUDIT-EXISTING-PROJECT.md | Comprehensive compliance check against best practices |
| Learning the methodology | FOUNDATIONAL-PRINCIPLES.md | Read The Big 3 principles first |
| Finding a specific pattern | Pattern tables below | Jump directly to implementation guidance |
Not sure? Start with the Tier 1 Quick Start above, then explore SETUP-PROJECT.md when you want more.
Quick reference: Which pattern solves which problem?
Can't find what you need? See PATTERN-LEARNING-PATH.md for guided learning by role or TROUBLESHOOTING.md for common issues.
- SETUP-PROJECT.md - Unified tiered setup for any project (replaces separate new/existing prompts)
- CLAUDE.md.template - Project context template
- settings.json.template - Hook configuration
- session-start.sh - Session initialization script
Project-type configurations with appropriate defaults:
- coding.md - Software development (TDD, debugging, git workflow)
- writing.md - Content creation (voice consistency, citations)
- research.md - Analysis projects (evidence tiers, hypotheses)
- hybrid.md - Mixed-purpose projects
Core implementation patterns organized by the spec-driven development phase they support:
| Pattern | Key Insight | Source |
|---|---|---|
| Spec-Driven Development | 4-phase model: Specify→Plan→Tasks→Implement | GitHub Spec Kit |
| Framework Selection Guide | Choose orchestration: Native (default) vs GSD vs CAII | Synthesis |
| Pattern | Key Insight | Source |
|---|---|---|
| Context Engineering | Specs as deterministic context; correctness > compression | Nate B. Jones |
| Memory Architecture | 4-tier lifecycle model for information management | Nate B. Jones |
| Johari Window Ambiguity | Surface hidden assumptions before task execution | CAII |
| Pattern | Key Insight | Source |
|---|---|---|
| Documentation Maintenance | ARCH/PLAN/INDEX trio as spec artifacts | Production |
| Architecture Decision Records | Document why, not just what | Software Eng |
| Evidence Tiers | Dual tier system (A-D + 1-5) for claims | Production |
| Pattern | Key Insight | Source |
|---|---|---|
| Long-Running Agent | External artifacts as memory; one feature at a time | Anthropic |
| Progressive Disclosure | 3-tier architecture; 73% token savings | Production |
| Advanced Hooks | PreToolUse, PostToolUse, Stop hooks for quality gates | Production |
| Advanced Tool Use | Tool search, programmatic calling | Anthropic |
| Agentic Retrieval | Dynamic navigation vs pre-computed embeddings | LlamaIndex |
| Parallel Sessions | 5+ terminal + 5-10 web sessions for parallel work streams | Boris Cherny |
| AI Image Generation | Automated visual assets in development pipelines | Community |
| Pattern | Key Insight | Source |
|---|---|---|
| Agent Principles | 6 principles for production AI reliability | Nate B. Jones |
| Agent Evaluation | Evals as tests; task-based, LLM-as-judge, infrastructure noise | Anthropic |
| MCP Patterns | 7 failure modes + positive patterns + OWASP security | Nate B. Jones + OWASP |
| MCP vs Skills Economics | Skills 50% cheaper than MCP; tradeoffs on speed vs cost | Tenzir |
| Plugins and Extensions | When to use Skills vs MCP vs Hooks vs Commands | Production |
| Safety and Sandboxing | OS-level isolation over permission prompts | Anthropic + OWASP |
| GSD Orchestration | Fresh context per subagent; state externalization | glittercowboy |
| Cognitive Agent Infrastructure | 7 fixed cognitive agents vs domain-specific proliferation | CAII |
| Recursive Context Management | Programmatic self-examination vs single forward pass | MIT CSAIL |
| Session Learning | Capture corrections to update persistent config | Lance Martin |
| Confidence Scoring | HIGH/MEDIUM/LOW assessment framework | Production |
| Recursive Evolution | Self-Evolution Algorithm: multi-candidate, judge loop, crossover | Google TTD-DR |
| Tool Ecosystem | When Claude Code vs alternatives (Aider, Cursor, OpenHands) | Community |
Reusable AI behavior patterns:
- skills/README.md - Comprehensive skills guide
- skills/QUICK-REFERENCE.md - Fast skill lookup and integration patterns
- skills/SKILL-TEMPLATE.md - Template for new skills
- skills/SECURITY-GUIDELINES.md - Security framework with MITRE ATLAS mapping
- skills/examples/ - 10 production-validated example skills:
  - systematic-debugger - 4-phase debugging methodology (REPRODUCE-ISOLATE-UNDERSTAND-FIX)
  - tdd-enforcer - Test-driven development enforcement (RED-GREEN-REFACTOR)
  - git-workflow-helper - Git best practices and safe operations
  - ultrathink-analyst - Deep analysis (FRAME-ANALYZE-SYNTHESIZE)
  - recursive-analyst - Self-Evolution Algorithm (multi-candidate, judge loop, crossover)
  - content-reviewer - Publication quality (evidence tiers, voice, balance)
  - research-extractor - Systematic research synthesis (HIGH RISK - 5-layer defense)
  - hypothesis-validator - Research hypothesis validation with confidence scoring
  - threat-model-reviewer - Security threat modeling (STRIDE)
  - detection-rule-reviewer - SIEM/detection engineering quality
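New skills should follow skills/SKILL-TEMPLATE.md. As a rough sketch of the agentskills.io shape (frontmatter field names assumed from that standard; the body text here is illustrative, not the actual repo skill):

```markdown
---
name: systematic-debugger
description: 4-phase debugging methodology. Use when diagnosing a failing test or runtime bug.
---

# Systematic Debugger

1. REPRODUCE - capture a minimal failing case
2. ISOLATE - bisect to the smallest responsible change
3. UNDERSTAND - explain the failure before touching code
4. FIX - patch, re-run the reproduction, add a regression test
```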
Complete .claude/ directories you can reference:
- examples/coding-project/ - Software development setup
- examples/writing-project/ - Content creation setup
- examples/research-project/ - Research and analysis setup
The SDD 4-phase model ensures clarity before code:
- Specify: What are we building and why?
- Plan: How will we build it?
- Tasks: What are the concrete steps?
- Implement: Execute with full context
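A minimal specs/ entry answering those questions might look like this (the file name, headings, and feature are illustrative, not mandated by the SDD model):

```markdown
<!-- specs/feature-login.md (illustrative) -->
# Feature: Login

## Specify - what and why
Users authenticate with email + password; required before any per-user data ships.

## Plan - how
Session cookie issued by the existing auth service; no new dependencies.

## Tasks
- [ ] POST /login endpoint
- [ ] Session middleware
- [ ] Integration test: bad password rejected
```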
From Anthropic's engineering blog:
"External artifacts become the agent's memory. Progress files, git history, and structured feature lists persist across sessions."
Specs, architecture docs, and task files bridge session boundaries.
- Simple bug fix: Skip to Tasks phase, brief spec in commit message
- Small feature (<1 day): Combine Specify+Plan, then implement
- Complex feature: Full 4-phase workflow with specs/
- Exploratory work: "Vibe code" first, retrofit specs if keeping
These patterns work across AI coding tools:
- Skills follow agentskills.io open standard (Claude, Codex, Cursor, etc.)
- SDD methodology applies to any AI coding agent
- Claude Code is our implementation context, not the only option
See DECISIONS.md for detailed reasoning on:
- Why prompts instead of template repos
- Why four presets instead of one or many
- Why AI-guided setup instead of scripts
- What to include vs. exclude
See SOURCES-QUICK-REFERENCE.md for top 20 Tier A/B sources or SOURCES.md for comprehensive database, including:
- Anthropic Engineering Blog posts
- Industry standards (GitHub Spec Kit, agentskills.io, OWASP MCP Guide)
- Production validation from real projects
Aligned standards:
- GitHub Spec Kit - 4-phase SDD model (59K+ stars)
- agentskills.io - Open standard for cross-platform skills
- OWASP MCP Security Guide - MCP security best practices
Foundational influences:
- Daniel Miessler's Fabric - Pattern structure and "scaffolding > models" philosophy
- Nate B. Jones's Memory Prompts - Context lifecycle management
- BMAD Method - Multi-agent architecture patterns
Contributions welcome! Please:
- Open an issue to discuss changes
- Follow existing patterns and style
- Update documentation as needed
MIT License - Use freely, attribution appreciated.
Built from patterns validated across 12+ production projects.