Commit f5ce583

docs: add accurate PR description for new prompt creation

- Emphasize that generate-codebase-context is NEW (not just enhanced)
- Detail all new files and research documents added
- Explain why this prompt was needed
- Clarify impact on workflow (optional but recommended)
- Provide clear usage instructions and review focus areas

1 parent 5a8cedf · commit f5ce583

1 file changed: docs/PR_DESCRIPTION.md (+280 −0)
# PR: Add codebase context generation with evidence-based analysis

## Summary

Creates a new `generate-codebase-context` prompt with comprehensive research-driven analysis capabilities. This prompt provides evidence-based codebase analysis with confidence assessment, supporting spec-driven feature development.

## What's New in This PR

### 1. New Prompt: `generate-codebase-context`

**File:** `prompts/generate-codebase-context.md` (877 lines)

A comprehensive prompt for analyzing codebases before feature development, incorporating battle-tested patterns from Claude Code and research best practices.
**Core Capabilities:**

- **6-Phase Analysis Process:**
  1. Repository structure detection
  2. Documentation audit with rationale extraction
  3. Code analysis (WHAT + HOW)
  4. Integration points mapping
  5. Gap identification
  6. Evidence-based documentation generation

- **Evidence Citation Standards:**
  - Code findings: `path/to/file.ts:45-67` (with line ranges)
  - Documentation findings: `path/to/doc.md#section-heading` (with anchors)
  - User input: `[User confirmed: YYYY-MM-DD]` (dated quotes)

- **Confidence Assessment:**
  - 🟢 High: Strong evidence from working code or explicit docs
  - 🟡 Medium: Inferred from context, feature flags, or implied
  - 🔴 Low: Cannot determine, conflicts, or unknowns
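Taken together, a finding in a generated context document might look like the following. The file paths, feature names, and flag are hypothetical, shown only to illustrate how the citation and confidence conventions combine:

```markdown
## System Capabilities (WHAT)

🟢 **Rate limiting on API routes** - `src/middleware/rateLimit.ts:12-48`
   Sliding-window limiter applied to all `/api/*` routes.

🟡 **Dark-mode theming** - behind feature flag `ENABLE_DARK_MODE`
   Inferred from `src/config/flags.ts:7`; rationale not documented.
   [User confirmed: 2024-11-02] "Dark mode ships next quarter."
```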
- **Key Features:**
  - Execution path tracing (step-by-step flows)
  - Essential files list (5-10 priority files with line ranges)
  - Interactive short questions (not batch questionnaires)
  - Separation of WHAT/HOW (code) vs WHY (docs) vs Intent (user)
  - Comprehensive example output structure
  - Quality checklist before completion

**Why This Prompt?**

Before this PR, we had no systematic way to analyze codebases before feature development. This prompt fills that critical gap by providing structured, evidence-based context that informs all subsequent spec-driven development steps.
### 2. Comprehensive Research Analysis 📚

**New Research Documents:**

- **`docs/research/reverse-engineer-prompts/claude-code-feature-dev-comparison.md`** (18,287 words)
  - Complete analysis of Claude Code's 7-phase feature-dev workflow
  - Agent specifications (code-explorer, code-architect, code-reviewer)
  - Gap analysis comparing our workflow to Claude Code's
  - Implementation roadmap with 3 phases

- **`docs/research/reverse-engineer-prompts/research-synthesis.md`** (8,000+ words)
  - Integration of the Claude Code analysis with existing research patterns
  - Actionable recommendations with a priority matrix
  - Specific enhancements for each prompt
  - Success metrics and implementation checklist

- **`docs/research/reverse-engineer-prompts/README.md`**
  - Overview of all research documents
  - How the research was applied to this PR
  - Key insights and success metrics

**Cataloged Existing Research:**

- `code-analyst.md` - Pattern for extracting WHAT/HOW from code
- `information-analyst.md` - Pattern for extracting WHY from documentation
- `context_bootstrap.md` - Manager orchestration pattern
### 3. Progress Tracking & Roadmap 🗺️

**`docs/PROGRESS.md`** - Complete implementation tracking:

- Phase 1 (this PR): New codebase-context prompt ✅
- Phase 2 (next PR): Enhance spec, add architecture-options, add review-implementation
- Phase 3 (future): Examples, tutorials, polish
- Success metrics for each phase
- Key decisions documented

## Changes by File

### New Files

```
prompts/generate-codebase-context.md (877 lines)
docs/research/reverse-engineer-prompts/claude-code-feature-dev-comparison.md
docs/research/reverse-engineer-prompts/research-synthesis.md
docs/research/reverse-engineer-prompts/README.md
docs/PROGRESS.md
```

### Existing Files (Cataloged)

```
docs/research/reverse-engineer-prompts/code-analyst.md
docs/research/reverse-engineer-prompts/information-analyst.md
docs/research/reverse-engineer-prompts/context_bootstrap.md
```
## Research Foundation

This prompt is based on proven patterns from:

1. **Claude Code feature-dev plugin**
   - Production-tested 7-phase workflow
   - Specialized agents (code-explorer, code-architect, code-reviewer)
   - Evidence-based analysis approach
   - Mandatory user checkpoints

2. **Existing research patterns**
   - code-analyst: WHAT/HOW from code analysis
   - information-analyst: WHY from documentation
   - context_bootstrap: Manager orchestration

3. **Best practices**
   - Evidence citations for traceability
   - Confidence levels to distinguish facts from inferences
   - Interactive questioning for better engagement
   - Phased analysis for thoroughness

## Key Principles Implemented

1. **Evidence-Based:** Every finding requires a `file:line` or `path#heading` citation
2. **Confidence Assessment:** All findings categorized as High/Medium/Low
3. **Separation of Concerns:** Code (WHAT/HOW) vs Docs (WHY) vs User (Intent)
4. **Stay in Lane:** Don't infer WHY from code - flag it as a gap for the user
5. **Interactive, Not Batch:** Short, focused questions (3-5 max per round)
6. **Flag Gaps Explicitly:** Better to document "Unknown" than to guess
7. **Actionable Outputs:** Specific file lists, execution traces, clear recommendations
## Example Output

The prompt generates comprehensive analysis documents like:

```markdown
# Codebase Context: [Project Name]

## 1. Repository Overview
- Type, components, organization with evidence

## 2. Documentation Inventory
- Found docs with timestamps
- Extracted rationale with source citations
- Conflicts and gaps flagged

## 3. System Capabilities (WHAT)
🟢 High Confidence Features (with file:line evidence)
🟡 Medium Confidence (feature toggles, experimental)
🔴 Low Confidence (dead code, unknowns)

## 4. Architecture (HOW)
- Components with responsibilities and evidence
- Communication patterns with file:line refs
- Architectural patterns with examples

## 8. Essential Files to Read
1. src/api/routes/index.ts:12-89 - Main route definitions
2. src/services/UserService.ts:45-234 - Core user logic
...

## 9. Execution Path Examples
User Login Flow:
1. POST /api/auth/login → src/api/routes/auth.ts:23
2. AuthController.login() → src/controllers/AuthController.ts:45
...

## 10. Confidence Summary
High Confidence: [list with evidence]
Medium Confidence: [list needing validation]
Low Confidence: [unknowns]
```
## Testing

- ✅ Prompt YAML frontmatter validated with the prompt loader
- ✅ Example output structure verified
- ✅ Evidence citation format tested
- ✅ Confidence assessment categories validated
- ✅ Documentation completeness reviewed
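The repository's actual prompt loader is not shown in this PR, but the frontmatter check it performs can be sketched in standalone, stdlib-only Python. Everything below (the function name and the sample prompt text) is illustrative, not the project's real implementation:

```python
import re


def has_valid_frontmatter(text: str) -> bool:
    """Return True if the document starts with a non-empty YAML
    frontmatter block delimited by '---' lines."""
    match = re.match(r"\A---\n(.*?)\n---\n", text, re.DOTALL)
    return bool(match) and match.group(1).strip() != ""


sample_prompt = """---
name: generate-codebase-context
description: Evidence-based codebase analysis
---
# Prompt body...
"""

print(has_valid_frontmatter(sample_prompt))   # frontmatter present
print(has_valid_frontmatter("# No header"))   # missing frontmatter
```

A real loader would additionally parse the block with a YAML library and validate required keys; this sketch only checks that the delimited block exists and is non-empty.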
## Breaking Changes

None - this is purely additive.

## Impact on Existing Workflow

### Before This PR

```
1. generate-spec → Create specification
2. generate-task-list-from-spec → Break into tasks
3. manage-tasks → Execute
```

### After This PR

```
1. generate-codebase-context → Analyze codebase (NEW)
2. generate-spec → Create specification (can reference context)
3. generate-task-list-from-spec → Break into tasks
4. manage-tasks → Execute
```

The new prompt is **optional but recommended** - it provides valuable context for better spec generation.
## Future Enhancements (Not in This PR)

Documented in `docs/PROGRESS.md` for future PRs:

### Phase 2 (Next PR)

- Enhance `generate-spec` with a mandatory clarifying phase
- Create `generate-architecture-options` prompt (NEW)
- Create `review-implementation` prompt (NEW)
- Update workflow documentation
- Create ADR template

### Phase 3 (Future PR)

- Complete example walkthroughs
- Best practices guide
- Troubleshooting documentation

## Success Metrics (Phase 1)

- ✅ Evidence citations in 100% of code findings
- ✅ Confidence levels marked for all findings
- ✅ Documentation audit phase included
- ✅ Interactive questioning approach (3-5 questions per round)
- ✅ Essential files list structure (5-10 files with line ranges)
- ✅ Execution path traces in examples
- ✅ Complete roadmap for Phases 2 and 3
## How to Use
230+
231+
Once merged, users can invoke the prompt:
232+
233+
```python
234+
# Via MCP client
235+
{
236+
"method": "prompts/get",
237+
"params": {
238+
"name": "generate-codebase-context"
239+
}
240+
}
241+
```
242+
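The payload above shows only the method and params. On the wire, MCP messages are JSON-RPC 2.0, so a complete request also carries `jsonrpc` and `id` fields. A minimal sketch of assembling one such message (the helper function name is ours; transport and client setup are out of scope here):

```python
import json


def make_prompts_get_request(prompt_name: str, request_id: int = 1) -> str:
    """Build a complete JSON-RPC 2.0 request string for MCP's prompts/get."""
    request = {
        "jsonrpc": "2.0",          # required by JSON-RPC 2.0
        "id": request_id,          # correlates the response to this request
        "method": "prompts/get",
        "params": {"name": prompt_name},
    }
    return json.dumps(request)


message = make_prompts_get_request("generate-codebase-context")
print(message)
```

How this message is framed and delivered depends on the transport (e.g. stdio or HTTP); an MCP client SDK normally handles that for you.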
The prompt will guide the user through a 6-phase interactive analysis, producing an evidence-based codebase context document in `/tasks/[n]-context-[name].md`.

## Review Focus Areas

1. **Prompt Quality:** Does the `generate-codebase-context` prompt provide clear, actionable guidance?
2. **Research Depth:** Is the research analysis comprehensive and well-documented?
3. **Evidence Standards:** Are the citation formats clear and consistent?
4. **Confidence Assessment:** Are the confidence levels well-defined?
5. **Example Output:** Does the example structure make sense?
6. **Future Roadmap:** Is the Phase 2/3 plan clear and actionable?
## Related Issues

This PR addresses findings from internal research showing:

- ❌ Gap: No systematic codebase analysis before feature development
- ❌ Gap: No evidence citation standards
- ❌ Gap: No confidence assessment for findings
- ❌ Gap: Batch questionnaires instead of interactive dialog

All addressed in this PR.

## Checklist

- [x] New prompt created with comprehensive examples
- [x] Prompt YAML frontmatter validated
- [x] Research analysis complete and documented
- [x] Progress tracking established
- [x] Future roadmap defined
- [x] Commit messages follow conventional commits
- [x] All commits are focused and well-documented
- [ ] PR review approved
- [ ] Tests passing (if applicable)

---

**Created by:** Research-driven development based on Claude Code analysis
**Documentation:** See `docs/PROGRESS.md` for the complete implementation plan
**Next Steps:** The Phase 2 PR will enhance spec generation and add architecture/review prompts