Type: Systematic literature review / research and analysis project
Demonstrates: Claude Code setup for research, hypothesis tracking, and evidence synthesis
✅ Stop hook - Warns about uncommitted research findings
- Critical for research: prevents losing analysis progress
- Custom message: "save your findings" emphasizes data preservation
- See `.claude/settings.json` lines 8-17
✅ Pre-approved permissions - Research commands, git operations
- `python -m`, `jupyter` (for data analysis)
- `git status`, `git diff`, `git log`
- See `.claude/settings.json` lines 2-7
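For orientation, a `settings.json` combining the permissions and Stop hook described above might look like the sketch below. This is an illustration, not a copy of the example's actual file; the exact patterns and hook command may differ:

```json
{
  "permissions": {
    "allow": [
      "Bash(python -m *)",
      "Bash(jupyter *)",
      "Bash(git status)",
      "Bash(git diff*)",
      "Bash(git log*)"
    ]
  },
  "hooks": {
    "Stop": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "git status --porcelain | grep -q . && echo '⚠️ Uncommitted research findings - save your findings' || true"
      }]
    }]
  }
}
```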
✅ Minimal CLAUDE.md - 37 lines (target ~60)
- Evidence tiers (critical for research validity)
- Hypothesis tracking format (standardization after 5 violations)
- Research integrity violations to avoid (learned from mistakes)
- See `.claude/CLAUDE.md`
✅ Evidence Tier System
- 4-tier classification (A/B/C/D) for all claims
- Strong claims require Tier A evidence only
- Enforces research rigor and source quality
✅ Hypothesis Tracking Standard
- Standardized format after 5 format violations
- Prevents inconsistent hypothesis documentation
- See template in CLAUDE.md
✅ Research Integrity Checklist
- Documents actual violations from this project
- Not generic guidelines - specific mistakes made
- 6 correlation/causation mix-ups, 4 language precision errors, 2 omitted contradictions
```
research-project/
├── .claude/
│   ├── CLAUDE.md            # Minimal context (37 lines)
│   └── settings.json        # Hooks + permissions
├── README.md                # This file (explains the example)
├── hypotheses/              # Hypothesis tracking
│   ├── HYP-001-productivity.md
│   ├── HYP-002-quality.md
│   └── README.md            # Tracking template
├── sources/                 # Source materials
├── analysis/                # Data synthesis
├── contradictions/          # Unresolved conflicts
├── BIBLIOGRAPHY.md          # Complete source list with tiers
└── FINDINGS.md              # Summary of results
```
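The layout above can be scaffolded in one step (directory and file names as shown; adjust to your project):

```shell
# Scaffold the research-project layout (run inside a new project root)
mkdir -p .claude hypotheses sources analysis contradictions
touch BIBLIOGRAPHY.md FINDINGS.md
```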
Why in CLAUDE.md: Research validity depends on source quality
- Every claim needs tier classification
- Strong claims require Tier A only
- Claude repeatedly forgets this without reminder
Not just documentation: This is a quality gate, not a style guide.
What's included:
```markdown
## Hypothesis Tracking Format
Each hypothesis file must include:
- Statement, rationale, confidence level (HIGH/MEDIUM/LOW)
- Supporting evidence (with tiers), contradicting evidence
- This format was violated 5 times → standardized
```

Why: After 5 inconsistent hypothesis files, standardization was necessary.
Principle: Document standards only after they've been violated multiple times.
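A hypothesis file following this format might look like the skeleton below. The field names and placeholders are illustrative; the authoritative template lives in `hypotheses/README.md`:

```markdown
# HYP-00N: <short name>
- Statement: <one-sentence hypothesis>
- Rationale: <why this is plausible>
- Confidence: MEDIUM
- Supporting evidence:
  - <claim> [Tier A] (<source>)
- Contradicting evidence:
  - <claim> [Tier C] (<source>)
```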
Not generic guidelines:
```markdown
## Research Integrity Violations to Avoid
- Repeatedly mixed correlation/causation (6 instances in draft)
- Used "definitely" instead of "may indicate" (4 corrections needed)
- Omitted contradicting evidence (caught in peer review twice)
```

These are actual mistakes from this project, not generic research ethics.
Project-specific rules:
- Sources must match BIBLIOGRAPHY.md entries (broke 3 citations)
- Tier A sources require DOI or permanent URL (2 became inaccessible)
- Expert quotes need date, context, consent flag
These are learned from errors, not preemptive documentation.
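The "DOI or permanent URL" rule for Tier A sources can be machine-checked. A minimal sketch, assuming DOIs follow the usual `10.xxxx/...` form and permanent URLs point at web.archive.org (both are assumptions; adapt to your bibliography conventions):

```python
import re

# Common DOI shape: 10.<registrant>/<suffix> - a heuristic, not a full validator
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def has_permanent_reference(entry: str) -> bool:
    """True if a bibliography entry contains a DOI or an archived URL."""
    return bool(DOI_RE.search(entry)) or "web.archive.org" in entry
```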
```bash
# In your research project
mkdir -p .claude
cp examples/research-project/.claude/CLAUDE.md .claude/
cp examples/research-project/.claude/settings.json .claude/

# Customize CLAUDE.md:
# 1. Replace evidence tiers if you use a different system
# 2. Update hypothesis format to match your methodology
# 3. Replace integrity violations with YOUR actual mistakes
```
- **Evidence Tiers:**
  - Adjust tier definitions for your field
  - Academic research may have stricter criteria
  - Industry research may use different validation
- **Hypothesis Format:**
  - Adapt to your research methodology
  - Experimental design has different needs than a literature review
  - Include fields your analysis requires
- **Integrity Violations:**
  - Start with an empty list
  - Add violations as they occur
  - Remove after 3+ sessions without recurrence
- **Commands:**
  - Add analysis tools you use (R, SPSS, Stata, etc.)
  - Include dataset validation commands
  - Pre-approve statistical analysis scripts
settings.json:

```json
"permissions": {
  "allow": [
    "Bash(python -m pytest*)",
    "Bash(python train.py*)",
    "Bash(jupyter notebook*)",
    "Bash(tensorboard*)",
    "Bash(git lfs*)"
  ]
}
```

CLAUDE.md additions:
```markdown
## Dataset Requirements
- All datasets in data/ with README.md metadata
- Train/val/test splits documented with random seeds
- Data preprocessing steps logged in notebooks/preprocessing.ipynb
```

Evidence tiers (different criteria):
```markdown
## Evidence Tiers (Qualitative)
- **Tier A**: Primary sources, transcripts, original documents
- **Tier B**: Published analysis, expert interpretation
- **Tier C**: Secondary sources, summaries
- **Tier D**: Personal impressions, preliminary observations
```

CLAUDE.md additions:
```markdown
## Known Gotchas
- Interview transcripts must anonymize participant IDs (P001, P002, not names)
- Code themes in codes/CODEBOOK.md before applying (changed 4 times mid-analysis)
- Each quote requires participant ID, date, and context sentence
```

Hypothesis tracking (stricter):
```markdown
## Hypothesis Format (Pre-registered)
Each hypothesis must include:
- H_N: Statement (exactly as pre-registered)
- Pre-registration ID and date (unchangeable)
- Planned analysis method
- Actual analysis method (if deviated, explain)
- Supporting/contradicting studies (with effect sizes)
```

Check bibliography consistency before committing:
```json
{
  "matcher": "Bash(git commit*)",
  "hooks": [{
    "type": "command",
    "command": "python scripts/validate_citations.py || (echo '⚠️ Citation errors detected'; exit 1)"
  }]
}
```

Show progress on session end:
```json
{
  "matcher": "",
  "hooks": [{
    "type": "command",
    "command": "bash -c 'echo \"📊 Total hypotheses: $(ls hypotheses/HYP-*.md 2>/dev/null | wc -l | tr -d \" \")\"'"
  }]
}
```

Validate evidence tiers when writing findings:
```json
{
  "matcher": "Write(FINDINGS.md)",
  "hooks": [{
    "type": "command",
    "command": "python scripts/check_evidence_tiers.py FINDINGS.md || echo '⚠️ Missing evidence tiers'"
  }]
}
```

After setting up:
- CLAUDE.md is under 60 lines (this example: 37 lines)
- Evidence tiers match your field's standards
- Hypothesis format reflects your methodology
- Integrity violations are YOUR actual mistakes, not generic advice
- Stop hook warns about uncommitted research
- Pre-approved commands match your analysis tools
- Citation/source requirements are project-specific
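The `scripts/validate_citations.py` referenced by the commit hook is not included in this example; its core could look like this minimal sketch. The `[@key]` citation format is an assumption — adapt the regex to whatever citation style your FINDINGS.md and BIBLIOGRAPHY.md actually use:

```python
import re

# Citation keys written as [@smith2024] - an assumed convention
CITE_RE = re.compile(r"\[@([A-Za-z0-9_-]+)\]")

def citation_keys(text: str) -> set:
    """Extract the set of citation keys from markdown text."""
    return set(CITE_RE.findall(text))

def missing_citations(findings: str, bibliography: str) -> list:
    """Keys cited in findings but absent from the bibliography, sorted."""
    return sorted(citation_keys(findings) - citation_keys(bibliography))
```

A CLI wrapper that reads both files and exits nonzero when `missing_citations()` is non-empty gives the commit hook its pass/fail signal.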
- Session start: Review uncommitted notes, recent commits
- Source extraction: Read papers, extract claims with evidence tiers
- Hypothesis updates: Add supporting/contradicting evidence
- Session end: Commit findings, push to backup
- Load context: CLAUDE.md reminds of evidence standards
- Synthesize: Identify patterns, conflicts, gaps
- Validate: Check claims against evidence tier requirements
- Document: Update FINDINGS.md with tiered references
- Review: Claude checks for integrity violations
- Cite: Verify all claims have appropriate evidence
- Contradict: Surface documented contradictions
- Finalize: Commit validated findings
Key: CLAUDE.md keeps research standards top-of-mind without requiring manual tracking.
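Similarly, `scripts/check_evidence_tiers.py` (used by the Write hook earlier) is not included. A sketch of the core check, assuming each claim in FINDINGS.md is a bullet point tagged `[Tier A]` through `[Tier D]` (that tagging convention is an assumption):

```python
import re

# Tier tags like [Tier A] ... [Tier D] - an assumed claim-tagging convention
TIER_RE = re.compile(r"\[Tier [ABCD]\]")

def untiered_claims(markdown: str) -> list:
    """Return bullet-point claims that lack an evidence tier tag."""
    bullets = [ln for ln in markdown.splitlines() if ln.lstrip().startswith("- ")]
    return [b for b in bullets if not TIER_RE.search(b)]
```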
- evidence-tiers.md - Dual tier system for claims
- confidence-scoring.md - HIGH/MEDIUM/LOW assessment
- context-engineering.md - External artifacts as memory
- FOUNDATIONAL-PRINCIPLES.md - The Big 3
- This is a reference example, not a real research project
- No actual research data included (focus on .claude/ structure)
- Customize evidence tiers for your field
- Research integrity violations should reflect YOUR mistakes
- Adapt hypothesis format to your methodology
Last Updated: February 2026