Context
The pr-review plugin ships with a convention-based postmortem knowledge base (.claude/knowledge/postmortems/). Currently each repo populates its own KB manually via /pr-review:postmortem-extract.
Opportunity
Algolia's Confluence space contains postmortem reports across all products. Mining these systematically would:
- Bootstrap any repo's KB — new repos get cross-org learnings on day one
- Surface cross-product patterns — an incident in Search infra may prevent one in Agent Studio
- Scale institutional memory — learnings from 50+ incidents, not just 5
Proposed approach
- Audit Confluence postmortem space (identify structure, volume, access)
- Build extraction pipeline: Confluence → structured postmortem format (anti-patterns, search strategies, safe patterns)
- Deduplicate and categorize by domain (infra, data, API, multi-tenancy, etc.)
- Ship as an optional knowledge/postmortems/ bundle in this plugin, or as a separate installable knowledge pack
- Automate: new Confluence postmortems trigger an extraction PR via GitHub Actions
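For the extraction step, the target format could look like the minimal sketch below. The `Postmortem` dataclass and `to_kb_markdown` renderer are assumptions for illustration; the fields mirror the categories named above (anti-patterns, search strategies, safe patterns), but the actual KB schema used by the pr-review plugin may differ:

```python
from dataclasses import dataclass, field

# Hypothetical structured form of one extracted Confluence postmortem.
# Field names are an assumption, not the plugin's confirmed schema.
@dataclass
class Postmortem:
    title: str
    domain: str  # e.g. "infra", "data", "api", "multi-tenancy"
    anti_patterns: list[str] = field(default_factory=list)
    search_strategies: list[str] = field(default_factory=list)
    safe_patterns: list[str] = field(default_factory=list)


def to_kb_markdown(pm: Postmortem) -> str:
    """Render a postmortem as a markdown file destined for
    .claude/knowledge/postmortems/ in the target repo."""
    lines = [f"# {pm.title}", f"Domain: {pm.domain}", ""]
    for heading, items in [
        ("## Anti-patterns", pm.anti_patterns),
        ("## Search strategies", pm.search_strategies),
        ("## Safe patterns", pm.safe_patterns),
    ]:
        lines.append(heading)
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    return "\n".join(lines)
```

A pipeline run would map each Confluence page into a `Postmortem`, deduplicate by title/domain, then write one markdown file per incident so the KB stays diffable in PRs.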
Open questions
- Access: who owns the Confluence postmortem space? Do we need specific permissions?
- Volume: how many postmortems exist? What's the signal-to-noise ratio?
- Privacy: any postmortems with sensitive customer data that can't be extracted?
/cc @pzaragoza93