diff --git a/.github/ISSUE_TEMPLATE/case.md b/.github/ISSUE_TEMPLATE/case.md
index 400b920..07bad8a 100644
--- a/.github/ISSUE_TEMPLATE/case.md
+++ b/.github/ISSUE_TEMPLATE/case.md
@@ -1,170 +1,171 @@
 ---
-name: Case
-about: Product-level description in stakeholder language - defines system boundaries and value propositions
+name: Case Brief
+about: Product-level overview for stakeholders - defines engineered system and acceptance criteria
 title: '[CASE] [Brief descriptive title]'
 labels: 'case'
 assignees: ''
-
 ---

-[Case writing rules:
+[Case Brief writing rules:
 - Product Focus: Define agent value and experience, not technical implementation
+- Problem Analysis: Lives in Case Product Specification - Brief references Spec for context
 - Agent Priority: List agents by % importance with Human/System distinction (even if similar to other cases)
-- Target System Clarity: Explicitly identify target system and distinguish from consumer/dependency systems
-- System Boundaries: Explicitly state what's included/excluded from this case
 - Basic English: Write for non-native speakers, avoid complex technical terms
-- Scope Limit: Target achievable milestones of 3–6 months only
-- Agent-Driven: Focus on agent behaviour and adoption, rather than system performance]
-
-## Target System
-
-[Which system does this case address and what is its position in the larger system architecture?]
-
-## Problem Statement
-
-[Describe the agent/business problem this case solves. What needs to change in the current state and why? Focus on WHAT agents need and WHY it matters. Leave technical details for `hypotheses` of experiments]
-
-### Current Agent Experience
+- Stakeholder Language: This brief is for business/product stakeholders
+- Minimal Content: Engineers need context to understand "what" and "why", not extensive product analysis
+- System Boundaries: Explicitly state what's included/excluded
+- Link to Details: Extended analysis lives in Coda, link from here
+- Scenario-Driven: Focus on agent behavior and acceptance, not system performance
+- Scope Limit: Target 3-6 month achievable milestones
+- Experiment Mapping: Link acceptance criteria to implementing experiments
+- Metrics Cascade: Case Product Specification defines product success metrics → Case Brief translates to verifiable acceptance criteria → Experiments validate technical implementation
+- Link Don't Duplicate: Reference specifications, don't copy content]

-[What agents experience today that needs improvement]
+## Engineered System

-### Desired Agent Experience
+[Which specific system/subsystem are we building and what is its position in the larger system architecture?]

-[What agents should be able to do after this case is complete]
+*For detailed system context and complete problem analysis, including current/desired agent experience, see: [Case Product Specification](link-to-coda)*

-## Value Proposition
+## Agent Priority Overview

-[Clear business / agent value that this case provides]
+[High-level stakeholder priorities with percentages. Detailed analysis in Case Product Specification in Coda.]

-## Agent Analysis
+**Priority Distribution:** [e.g., "Primary: 60% Developers; Secondary: 30% System Integrators; Minimal: 10% End Users"]

-[Map all agents (human and system) by priority with percentages. Focus on WHO / WHAT will interact with or benefit from the Target System]
-**Agent Priority Overview**: [e.g., "Primary: 60% Developers; Secondary: 30% Monitoring Systems; Minimal: 10% End Users"]
-[Optional: Include an evaluation / justification for why these priorities make sense for this case]
+**Rationale:** *Optional* [1-2 sentence justification for these priorities]

-### [Primary Agent Name] ([X%] - Primary)
-- **Agent Type**: [Human Agent: role / persona] OR [System Agent: system type / function]
-- **Current Pain Points**: [What problems do they have today with existing systems]
-- **Desired Outcomes**: [What success looks like for them]
-- **Agent Journey**: [Action] → [Action] → [Successful Outcome]
-- **Success Metrics**: [How they measure success - can include system metrics for System Agents]
-- **Integration Requirements**: [For System Agents: APIs, data formats, protocols needed]
-
-### [Secondary Agent Name] ([Y%] - Secondary)
-
-[Same structure as above]
-
-[Continue the pattern for all Agents, ordered by priority]
+*For detailed agent analysis with agent journeys and integration requirements, see: [Case Product Specification](link-to-coda)*

 ## Expected Agent Experience & Acceptance

-[BDD scenarios that define both the Target System behaviour and the acceptance criteria. Describe what agents will experience, NOT how the Target System works internally. Focus on acceptance testing, not repeating the desired outcomes already listed in Agent Analysis. Validation priorities are derived from Agent Priority Overview above – no separate priority statement needed here]
+[Define the scenarios the Engineered System must handle and their acceptance criteria. Focus on observable outcomes, not internal system operations.]
+
+*Note: Link acceptance criteria to implementing experiments during the experiment planning phase.*

 ### Agent Acceptance Scenarios

-**Scenario 1: [Primary Happy Path for Human Agent]**
-- Given [agent context / starting point]
+
+**Scenario 1: [Primary Scenario for Primary Agent]**
+- Given [detailed initial conditions]
 - When [agent performs action]
 - Then [agent experiences result]
 - And [additional agent benefit]

 **Acceptance Criteria:**
+[Each criterion should be demonstrable within 5-10 minutes by non-developers or through developer demo. Validation methods: Observable (UI/logs/behavior), Measurable (counted/timed), Testable (test scripts), User-Validated (actual users)]
-[It would be preferable if non-developers can verify this work in 5-10 minutes]
-- [ ] [Specific measurable criterion for this scenario]
-- [ ] [Performance / quality requirement for this behavior]
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [How to verify - e.g., "Dashboard shows metric within target"]*
+- [ ] [Performance/quality requirement] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*
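+
+*Example of a filled-in scenario (illustrative only - the agent, feature, and numbers below are invented, not prescribed):*
+
+- *Given a registered developer whose API token has expired*
+- *When they request a token refresh from the CLI*
+- *Then they receive a new token without re-entering credentials*
+- *And the old token stops working within one minute*
+
+*Example criterion: "Token refresh completes in under 5 seconds" → Experiment: TBD; Validation: timed demo by a non-developer.*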

-**Scenario 2: [Primary Happy Path for System Agent]**
-
-- Given [system agent needs specific data / functionality]
-- When [system agent makes API call / integration request]
-- Then [target system provides required response / data]
-- And [system agent can successfully complete its function]
+**Scenario 2: [Secondary Scenario - Success path for Secondary Agent]**
+- Given [different initial conditions]
+- When [alternative agent action]
+- Then [expected alternative outcome]

 **Acceptance Criteria:**
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-[How to verify system agent integration works, e.g. API tests, data format checks]
+**Scenario 3: [Alternative Scenario - Different approach or edge case]**
+- Given [edge case conditions]
+- When [action that triggers alternative path]
+- Then [expected handling]

-**Scenario 3: [Alternative Path]**
+**Acceptance Criteria:**
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-Given [Different initial conditions]
-When [Alternative stakeholder action]
-Then [Expected alternative response]

 **Acceptance Criteria:**
+**Scenario 4: [Error Scenario - Failure case and recovery]**
+- Given [error conditions]
+- When [action that triggers error]
+- Then [expected error handling and recovery]

 **Acceptance Criteria:**
+- [ ] [Error handling criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*
-- [ ] [Specific measurable criterion for this scenario]

+## Scope Summary

-**Scenario 4: [Error/Edge Case Handling]**
+### Engineered System Scope

-Given [Error conditions]
-When [Action that triggers error]
-Then [Expected error handling behavior]
+**In Scope:** [What this system will do - boundaries included]

 **Acceptance Criteria:**
+**Out of Scope:** [What explicitly will not be addressed - link to other cases handling these]

-- [ ] [Specific measurable criterion for error handling]
+*For detailed interfaces and integration points, see: [Case Architecture Specification](link-to-arch-doc)*

-## System Context & Boundaries
+## Critical Dependencies & Blockers

-### Target System Scope
+**Blocking This Case:**
+- [Case/System #X]: [What must complete before we can proceed]

-In Scope: [What the Target System will do and the boundaries included]
+**This Case Blocks:**
+- [Case/System #Y]: [What depends on this case's completion]

-Out of Scope: [What explicitly will not be addressed]
-Interfaces: [External systems (consumer systems, dependency systems, peer / interacting systems), data flows, and dependencies]
+**Bottlenecks** (Resource constraints):
+- [Resource constraint] - Impact: [Description]

-### Quality Attributes
-Performance: [Response time, throughput requirements]
-Scalability: [Growth expectations and constraints]
-Reliability: [Uptime, error rate expectations]
-Security: [Security requirements and compliance needs]
-Usability: [User experience requirements]
+**External Blockers** (Third-party dependencies):
+- [Third-party dependency] - Expected resolution: [Timeline]

-### Constraints & Dependencies:
+**Critical Path Items:**
+- [Dependency with resolution date]
+- [Risk requiring immediate attention]

-Dependencies: [External dependencies and other cases / systems this depends on]
-Technical Constraints: [Technical limitations and requirements]
-Business Constraints: [Business rules, resource, and timeline constraints]
+*For complete dependency analysis and technical interfaces, see: [Case Product Specification](link-to-coda) and [Case Architecture Specification](link-to-arch-doc)*

-## Risks Assessment
+## Decision Log

-**[Risk 1]**
+[Enumeration of related ADRs - decisions themselves live in ADR documents]

-Impact: [High / Med / Low]
-Probability: [High / Med / Low]
-Mitigation Strategy: [Mitigation approach]
-Owner: [Responsible person]
+- [Date] - ADR #[XXXX] - [Case decomposition decision] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]
+- [Date] - ADR #[XXXX] - [Brief description] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]

-## Decision Log
+## References & Links

-[Record key architectural and design decisions]
+**Full Case Details:**
+- [Case Product Specification](link-to-coda) - Extended product analysis, detailed agent journeys, business context

-[Date] - [Decision] - [Rationale] - [Impact on agents]
-Status: [Active/Superseded]
+**Related Architecture:**
+- [Case Architecture Specification](link-to-arch-doc) - Technical architecture, interfaces, integration points

 ## Learning Outcomes

-[To be filled in during and after the case has been completed]
+[To be filled during and after case completion]

 **What we learned:**
-
-Key insights gained:
-Assumptions validated/invalidated:
-Unexpected discoveries:
+- Key insights gained:
+- Assumptions validated/invalidated:
+- Unexpected discoveries:

 **What we would do differently:**
+- Process improvements:
+- Technical approach changes:
+
+## Review & Acknowledgment
+
+[People who should review and acknowledge understanding of this case]

-Process improvements:
-Technical approach changes:
+- [ ] [Person 1]
+- [ ] [Person 2]
+- [ ] [Person 3]

+*Note: Check your name after reading and understanding this case to confirm awareness and reduce communication overhead.*
+
+---

-[
 **Final Checklist Before Submitting:**
-- [ ] Does this describe Agent value, not technical implementation?
-- [ ] Are agents prioritized with clear percentages and Human / System distinction?
-- [ ] Is the Target System clearly identified and distinguished from consumer / dependency systems?
-- [ ] Are system boundaries clearly defined?
-- [ ] Is the language simple enough for non-native speakers?
-- [ ] Is the scope limited to 3-6 months of achievable work?
-- [ ] Do scenarios focus on agent behavior, not system performance?
-]
+- [ ] Does this describe agent value, not technical implementation?
+- [ ] Is problem analysis referenced (not duplicated) from Case Product Specification?
+- [ ] Is Agent Priority Overview high-level with justification?
+- [ ] Are acceptance criteria clear and verifiable?
+- [ ] Do scenarios use correct terminology (Primary/Secondary/Alternative/Error)?
+- [ ] Is scope limited to 3-6 months of achievable work?
+- [ ] Are only critical dependencies and blockers listed?
+- [ ] Are links to Case Product Specification and Architecture docs present?
+- [ ] Are experiment links marked as TBD where not yet planned?
+- [ ] Is Review & Acknowledgment section complete?
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/experiment.md b/.github/ISSUE_TEMPLATE/experiment.md
index aac2641..06d412f 100644
--- a/.github/ISSUE_TEMPLATE/experiment.md
+++ b/.github/ISSUE_TEMPLATE/experiment.md
@@ -1,152 +1,226 @@
 ---
-name: Experiment
-about: Engineering-level testable statement that validates part of a case
+name: Implementation Experiment
+about: Hypothesis-driven technical implementation - validates specific technical approach
 title: '[EXPERIMENT] [Brief descriptive title]'
 labels: 'experiment'
 assignees: ''
-
 ---

-## Hypothesis Statement
+[Experiment writing rules:
+- Hypothesis-Driven: Clear technical assumption being tested
+- Lightweight: Minimal verification - focus on validating hypothesis
+- Engineer Freedom: Choose verification approach that fits hypothesis
+- Run Fast: Code may be thrown away if hypothesis fails
+- Technical Focus: Success criteria are technical, not product/user outcomes. Product criteria live in Case Brief
+- Link, Don't Copy: Reference Case Brief, Architecture docs - don't duplicate]
+
+## Experiment Type & Hypothesis
+
+**Type:** [Implementation / Research / Analysis / Proof-of-Concept]
+
+**What we believe:** [Technical approach or assumption we're testing]
+
+**Expected outcome:** [Measurable technical result we expect]
+
+**How we'll verify:** [Brief verification approach - detailed in Success Criteria below, expand in Verification Approach if non-standard]
+
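+*Example of a filled-in hypothesis (illustrative only - the component and numbers below are invented, not prescribed):*
+
+- *Type: Implementation*
+- *What we believe: caching agent profiles in memory will remove the database round-trip from the hot path*
+- *Expected outcome: median profile lookup latency drops below 50 ms under the current load profile*
+- *How we'll verify: replay a recorded request sample against both code paths and compare latency percentiles*
+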
Detailed "why" in ADRs if architectural decision] -## Experimental Design +## Engineered System Context *Optional* -### Setup Requirements +[Optional - use if experiment context isn't obvious from Case Brief and Architecture docs] -**Environment:** +**Engineered System Focus:** [Which part of the system this experiment addresses] -[Development / Testing / Production-like environment needs] -[Specific configuration requirements] +**Component:** [Specific component being modified/created/tested] -**Data Requirements:** +**Architecture Layer:** [Where this fits - e.g., Presentation / Business Logic / Data / Infrastructure] -[Test data needed] -[Data volume and characteristics] +**Integration Points:** [Only if relevant to this experiment] +- [Interface/API this experiment provides or consumes] +- [System this experiment integrates with] -**Tool Requirements:** +*Example Integration Points:* +- *REST API endpoint: POST /api/contracts/deploy* +- *WebSocket connection for real-time updates* +- *Integration with external payment service via HTTPS* -[Measurement and monitoring tools] -[Testing frameworks and utilities] +*For complete integration architecture, see: [Architecture Documentation](link)* -### Test Methodology +**Critical Path:** -Approach: [Controlled experiment / A-B test / Spike / Prototype / etc.] +*Impediments* (Active obstacles): +- [Active obstacle preventing work] - Status: [Active/Resolved] -**Steps:** +*Bottlenecks* (Resource constraints): +- [Resource constraint slowing progress] -1. [Detailed step with expected outcome] -2. [Detailed step with expected outcome] -3. [Detailed step with expected outcome] +*External Blockers* (Third-party dependencies): +- [Dependency causing delays] - Expected resolution: [Timeline] -**Variables:** +*Blocking this experiment:* +- [Experiment/System #X must complete first] -Independent Variables: [What we're changing] -Dependent Variables: [What we're measuring] -Control Variables: [What we're keeping constant] +## Outcomes -## Expected Outcomes & Validation +[Checkbox list - when all checked, experiment is ready to close] -**Expected Results:** +**Code/Artifacts:** +- [ ] [Specific code module/component committed to branch X] +- [ ] [Configuration file updated] +- [ ] [Database migration script created] +- [ ] [API endpoint implemented] +- [ ] [Test suite added] -- Key metric 1: [Expected range / value] -- Key metric 2: [Expected range / value] +**Documentation:** +- [ ] [Technical specification document created] +- [ ] [API documentation updated] +- [ ] [Architecture diagram added to wiki] -**Validation Criteria:** +**Analysis/Research:** +- [ ] [Performance analysis report completed] +- [ ] [Technology comparison matrix documented] +- [ ] [Proof-of-concept demo recorded] -- [ ] Hypothesis Confirmed If: [Specific measurable criterion] -- [ ] Hypothesis Rejected If: [Specific measurable criterion] -- [ ] Inconclusive If: [Conditions requiring further investigation] +**Required Completion Items:** +- [ ] Hypothesis outcome documented in Results section (Confirmed/Rejected/Inconclusive) +- [ ] Key learnings captured in Results section +- [ ] Impact on parent case documented in Results section +- [ ] Next steps identified based on outcome -## Resources & Constraints +## Success Criteria & Metrics -**Required Resources:** +[Focus on technical criteria - product criteria are in Case Brief] -Human: [Roles needed and time commitment] -Technical: [Computing resources, environments, tools, licenses] -Timeline: [Estimated duration for setup, execution, 

+**Hypothesis Confirmed If:**
+- [ ] [Specific measurable technical criterion with threshold]
+- [ ] [Performance metric with acceptable range]
+- [ ] [Quality/capability demonstrated]

-**Risks & Mitigation:**
-**[Risk 1]**
+**Hypothesis Rejected If:**
+- [ ] [Specific technical failure condition]
+- [ ] [Performance below minimum threshold]
+- [ ] [Critical quality requirement not met]

-System Impact: [System impact]
-Probability: [High / Med / Low]
-Mitigation Strategy: [Prevention]
-Rollback plan: [Recovery]
+**Inconclusive If:**
+- [ ] [Conditions requiring further investigation]
+- [ ] [Mixed results needing additional experiments]
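+
+*Example thresholds (illustrative only - the numbers below are invented, not prescribed):*
+
+- *Confirmed if: p95 response time stays under 200 ms across three consecutive test runs*
+- *Rejected if: p95 exceeds 500 ms, or the error rate passes 1% at target load*
+- *Inconclusive if: results vary by more than 20% between runs, which suggests an unstable test setup*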

-## Results
+## Verification Approach *Optional*

-[To be filled after experiment completion]
+[Optional - document only if you're NOT doing standard code review + basic testing]

-**Data Collected:**
+**Standard approach assumed:** Code review + linter + basic testing to verify hypothesis

-[Actual measurements and observations]
+**Special verification for this experiment:**
+- [Document only non-standard verification needs, e.g.:
+  - Load testing with specific parameters
+  - Security review due to sensitive operations
+  - Integration testing with specific external system]

-**Analysis:**
+## Resources & Timeline

-[Statistical analysis, trend analysis]
+**Team:**
+- [Role/Person] - [Time commitment]

-**Conclusion:**
+**Timeline:** [Estimated duration for experiment]

-[Hypothesis confirmed / rejected / inconclusive]
-[Confidence level in results]
+## Decision Log

-## Learnings and Insights
+[Enumeration of related ADRs - decisions themselves live in ADR documents]

-[To be filled after experiment completion]
+- [Date] - ADR #[XXXX] - [Case decomposition decision] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]
+- [Date] - ADR #[XXXX] - [Brief description] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]

-**Technical Learnings:**
+## References & Links

-[What we learned about the system]
-[Unexpected technical discoveries]
+**Case Brief:** [Link] - See acceptance criteria for product context

-**Process Learnings:**
+**Full Case Details:**
+- [Case Product Specification](link-to-coda) - Extended product analysis, detailed agent journeys, business context

-[What we learned about our experimental approach]
-[Improvements for future hypotheses]
+**Architecture Docs:** [Link if exists] - Technical architecture and design decisions

+**Additional Resources:** [Optional]
+- [Tool documentation]
+- [Best practices guide]
+- [Research paper]
+- [Meeting notes]
+- [Related experiments]

-## Impact on Parent Case
+## Results & Learning

-[How these results affect the parent case and its acceptance criteria]
+[Fill after experiment completion]

-**Case Progression:**
+**What Happened:**
+[Actual results compared to expected outcome]

-[How this moves the case forward]
-[What case assumptions were validated / invalidated]
+**Hypothesis Outcome:** [Confirmed / Rejected / Inconclusive]

-## Next Steps
+**Confidence Level:** [High / Medium / Low]

-**If Hypothesis Confirmed:**
+**Key Insights:**
+- [What we learned about the technical approach]
+- [Unexpected technical discoveries]

-- [ ] [Specific next actions]
-- [ ] [Additional hypotheses to test]
+**Impact on Parent Case:**
+- [How these results affect parent case acceptance criteria]
+- [What case assumptions were validated/invalidated]

-**If Hypothesis Rejected:**
+**Next Steps:**

-- [ ] [Alternative approaches to investigate]
-- [ ] [Case pivot considerations]
+*If Confirmed:*
+- [ ] [Specific next action]
+- [ ] [Next experiment to validate]

-**If Hypothesis Rejected:**
+*If Rejected:*
+- [ ] [Alternative approach to investigate]
+- [ ] [Case pivot consideration]
+
+*If Inconclusive:*
+- [ ] [Additional investigation needed]
+- [ ] [Refinement to experiment design]
+
+## Review & Acknowledgment
+
+[People who should review and acknowledge understanding of this experiment]
+
+- [ ] [Person 1]
+- [ ] [Person 2]
+- [ ] [Person 3]
+
+*Note: Check your name after reading and understanding this experiment to confirm awareness and reduce communication overhead.*
+
+---

-**If Inconclusive:**
+**Final Checklist Before Submitting:**
-- [ ] [Additional experiments needed]
-- [ ] [Refinements to experimental design]
+- [ ] Is hypothesis clear and testable?
+- [ ] Are success criteria measurable and technical (not product-focused)?
+- [ ] Is scope limited and specific to this experiment?
+- [ ] Are links to Case Brief and Architecture docs present?
+- [ ] Are only critical dependencies listed?
+- [ ] Is verification approach appropriate (standard or documented special needs)?
+- [ ] Is this lightweight enough for experimental approach (not production feature development)?
+- [ ] Does Decision Log enumerate relevant ADRs?
+- [ ] Are Outcomes specific and actionable?
\ No newline at end of file
diff --git a/README.md b/README.md
index b5f13c4..3b73641 100644
--- a/README.md
+++ b/README.md
@@ -4,16 +4,16 @@
 The repository serves as the hub for accumulating and managing project knowledge.

 By leveraging **GitHub Projects**, we enable real-time tracking of progress and workflows.

-### Key entities in the Project
+### Key Entities in the Project

 - **Cases**

-  Cases define high-level R&D goals and priorities. They are a specific type of GitHub Issue created and modified solely through stakeholder decisions. Cases serve as strategic drivers, guiding development and documenting the outcomes of experiments conducted under their scope.
+  Cases are product-focused GitHub Issues that define user value propositions and agent experiences. They are created and modified through stakeholder decisions, serving as strategic drivers that map agent outcomes to experiments and coordinate system-wide development efforts.

 - **Experiments**

-  Experiments are sub-issues linked to Cases, representing focused initiatives or hypotheses to be tested. Each Experiment is assigned an owner responsible for leading the effort and producing results. Upon completion, Experiments result in a pull request (PR) to this repository to ensure that findings are documented, reviewed, and integrated.
+  Experiments are technical GitHub Issues (sub-issues of Cases) representing focused technical hypotheses to be tested through implementation and validation. Each Experiment is assigned an owner responsible for leading both the technical implementation and validation phases, with results documented through PRs to ensure findings are integrated into the knowledge repository.

 - **Tasks**

-  Tasks are sub-issues within Experiments and may reside in various repositories. Each Task is assigned to different engineers and tracked via GitHub Project boards. Tasks are completed when a documented PR is submitted, regardless of the destination repository. The Experiment owner sets the acceptance criteria and ensures that Tasks contribute directly to the broader goals of the Experiment.
+  Tasks are engineering work items (sub-issues of Experiments) assigned to individual engineers and tracked via GitHub Project boards. Tasks may reside in various repositories and are completed when documented PRs are submitted. The Experiment owner sets acceptance criteria and ensures Tasks contribute to broader Experiment goals.

 ## Getting Started
diff --git a/docs/ADR/adr-template-case-decomp.md b/docs/ADR/adr-template-case-decomp.md
new file mode 100644
index 0000000..59cca39
--- /dev/null
+++ b/docs/ADR/adr-template-case-decomp.md
@@ -0,0 +1,161 @@
+---
+name: ADR - Case Decomposition
+about: Documents how a case was decomposed into experiments and why
+title: 'ADR #[ID]: Case Decomposition - [Case Name]'
+labels: 'adr, case-decomposition'
+assignees: ''
+---
+
+# ADR #[ID]: Case Decomposition - [Case Name]
+
+*This ADR documents the decomposition of a case into implementing experiments. For full case context, see the Case Brief.*
+
+## Date
+
+**Decision date:** YYYY-MM-DD
+**Last status update:** YYYY-MM-DD
+
+## Status
+
+- [ ] Proposed
+- [ ] Accepted
+- [ ] Deprecated
+- [ ] Superseded
+
+### Implementation Status
+
+- [ ] Planned
+- [ ] In Development
+- [ ] Implemented
+- [ ] Verified
+- [ ] Discontinued
+
+## People
+
+### Decision Owner
+[Person/team accountable for this decomposition decision]
+
+### Consulted (Architects/Tech Leads)
+- [Person 1]
+- [Person 2]
+
+### Informed (Affected Teams)
+- [ ] [Person 1]
+- [ ] [Person 2]
+
+*Note: Check your name after reading this ADR.*
+
+## Decision
+
+**Parent Case:** [Link to Case Brief]
+
+**Experiment List:**
+
+1. [Experiment Name/ID] - [Brief description]
+   - Addresses acceptance criteria: [Link to specific criterion in Case]
+   - Focus: [What this experiment validates/builds]
+
+2. [Experiment Name/ID] - [Brief description]
+   - Addresses acceptance criteria: [Link to specific criterion in Case]
+   - Focus: [What this experiment validates/builds]
+
+3. [Experiment Name/ID] - [Brief description]
+   - Addresses acceptance criteria: [Link to specific criterion in Case]
+   - Focus: [What this experiment validates/builds]
+
+[Continue for all experiments...]
+
+**Decomposition Approach:** [Brief 1-2 sentence summary - e.g., "Split by user-facing features to enable parallel development" or "Layered approach from infrastructure to UI"]
+
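+*Example decomposition (illustrative only - the case and experiments below are invented, not prescribed):*
+
+1. *"Schema migration spike" - validates that existing data can migrate without downtime (criterion 1)*
+2. *"Read-path cache" - validates the latency criterion on the new storage (criterion 2)*
+3. *"Cutover rehearsal" - validates the rollback and recovery criteria (criteria 3-4)*
+
+*Decomposition approach: riskiest assumption first - migration feasibility gates the other two experiments.*
+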
+## Decision in Detail *Optional*
+
+[Use this section to explain the rationale for this specific decomposition]
+
+**Why This Decomposition:**
+- [Key reason 1 - e.g., "Enables parallel work streams"]
+- [Key reason 2 - e.g., "Validates riskiest assumptions first"]
+- [Key reason 3 - e.g., "Aligns with team expertise"]
+
+**Decomposition Logic:**
+[Explain how you arrived at this particular split - by feature? by layer? by risk? by dependency?]
+
+**Sequencing Considerations:**
+[If experiment order matters, explain why certain experiments must come before others]
+
+**Team/Resource Considerations:**
+[If team structure or available skills influenced the decomposition]
+
+## Options *Optional*
+
+[Use only if alternative decomposition approaches were seriously considered. Keep compact.]
+
+**Alternative approaches considered:**
+
+### Option 1: [Brief name] (NOT SELECTED)
+- **Approach:** [1 sentence description]
+- **Why not:** [Brief reason for rejection]
+
+### Option 2: [Brief name] (NOT SELECTED)
+- **Approach:** [1 sentence description]
+- **Why not:** [Brief reason for rejection]
+
+## Consequences *Optional*
+
+[Use only if there are significant implications worth documenting. Keep compact.]
+
+**Positive Consequences:**
+- [Benefit of this decomposition approach]
+- [What this enables]
+
+**Trade-offs Accepted:**
+- [What we're giving up with this approach]
+- [Challenges this decomposition introduces]
+
+**Risks:**
+- [Risk if experiments don't compose correctly]
+- [Mitigation approach]
+
+## Advice *Optional*
+
+[Raw, unfiltered input from people who provided advice during the decomposition decision]
+
+- [Advice given] ([Name, Role], YYYY-MM-DD)
+- [Advice given] ([Name, Role], YYYY-MM-DD)
+
+## References & Links
+
+**Parent Case:**
+- [Case Brief](link) - Full case context and acceptance criteria
+
+**Related Architecture:**
+- [Architecture Document](link) - System design influencing decomposition
+
+**Related Cases:**
+- [Case #X](link) - Related case that influenced this decomposition
+
+**Background Material:**
+- [Meeting notes](link)
+- [Technical research](link)
+- [Similar decomposition examples](link)
+
+## ADR Relationships
+
+### Supersedes
+- ADR #[X]: [Previous decomposition decision if this replaces it]
+
+### Superseded By
+- ADR #[X]: [Future decomposition decision if this gets replaced]
+
+### Related ADRs
+- ADR #[X]: [Technical decision that influenced this decomposition]
+- ADR #[X]: [Other case decomposition that this coordinates with]
+
+---
+
+**Checklist Before Accepting:**
+- [ ] All case acceptance criteria mapped to experiments?
+- [ ] Experiment sequence makes sense?
+- [ ] No major gaps in coverage?
+- [ ] Team capacity considered?
+- [ ] Dependencies between experiments identified?
+- [ ] Discussed in architectural forum if required?
\ No newline at end of file