# Customer-Driven Copy Positioning Design

**Date:** 2025-10-29
**Status:** Approved
**Complexity:** 3 story points

## Context

This design documents the customer-driven, practitioner-first positioning for the portfolio site. It positions direct engagement with domain experts (1000+ Statsbomb collectors, Wise content teams) as a core architectural skill, not an afterthought or soft skill.

## Core Thesis Statements

Extracted via `/wri:thesis` from real Statsbomb and Wise stories:

1. **"Architecture emerges from interpreting workflow friction, not transcribing feature requests - inference beats explicit requirements"**
2. **"Systems designed for human capability shift validation to prevention, letting practitioners focus on judgment over correction"**


## Real Stories Foundation

### Statsbomb Stories

**2-Collectors-Per-Match Insight:**
- **Before:** 2 collectors per match, each following a team (redundancy on paper)
- **Practitioner insight:** Daily conversations revealed this was duplicated effort, not collaboration
- **Architectural decision:** Break matches by decision, not team → increased correctness without additional effort

**Valid by Default Evolution:**
- **Before:** Validate everything after collection (collectors caught all errors)
- **Practitioner insight:** Observing collectors revealed human capability limits: people can't maintain focus while constantly catching errors
- **Architectural decision:** Computer vision + linting catches 99% of issues; humans handle the 1% of edge cases requiring judgment
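
The validation → prevention split above can be pictured as a lint-then-triage pass: automated checks accept the bulk of events, and only the ones needing judgment reach a human. This is a minimal sketch, not the actual Statsbomb pipeline; `CollectedEvent`, `lint`, `triage`, and the specific checks are all hypothetical.

```typescript
// Hypothetical sketch: run events through automated lint checks first,
// escalating to a human only when a check needs judgment.
type CollectedEvent = { type: string; timestampMs: number; playerId: string };

type LintResult =
  | { kind: "valid" }
  | { kind: "needs_judgment"; reason: string };

// Illustrative checks: a non-negative timestamp and a known event type.
const KNOWN_TYPES = new Set(["pass", "dribble", "carry", "shot"]);

function lint(event: CollectedEvent): LintResult {
  if (event.timestampMs < 0) {
    return { kind: "needs_judgment", reason: "negative timestamp" };
  }
  if (!KNOWN_TYPES.has(event.type)) {
    return { kind: "needs_judgment", reason: `unknown type: ${event.type}` };
  }
  return { kind: "valid" };
}

// Partition a batch: automation accepts the bulk, humans see only edge cases.
function triage(events: CollectedEvent[]) {
  const accepted: CollectedEvent[] = [];
  const escalated: { event: CollectedEvent; reason: string }[] = [];
  for (const event of events) {
    const result = lint(event);
    if (result.kind === "valid") accepted.push(event);
    else escalated.push({ event, reason: result.reason });
  }
  return { accepted, escalated };
}

const { accepted, escalated } = triage([
  { type: "pass", timestampMs: 1200, playerId: "p1" },
  { type: "???", timestampMs: 1300, playerId: "p2" },
]);
```

Returning a reason with each escalation matters: the edge cases that do reach a human arrive with context, keeping the human's job judgment rather than detective work.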

**Event Storming Domain Mapping:**
- **Context:** Lead collectors helped map domains (information, collection, operations, media, aggregation)
- **Insight:** "pass, pass, dribble" are instantaneous events AND durational carry; multiple carries = possession
- **Impact:** System needed to model facts at atomic and aggregate levels simultaneously
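
A minimal sketch of modeling both levels at once, under a deliberately simplified event shape: atomic events stay exactly as recorded, and possessions exist only as derived aggregates. The types and the `derivePossessions` helper are hypothetical illustrations.

```typescript
// Hypothetical sketch: atomic facts (passes, carries) are the stored record;
// higher-level facts like "possession" are derived, never stored as primary.
type AtomicEvent = { type: "pass" | "carry" | "dribble"; team: string };

type Possession = { team: string; events: AtomicEvent[] };

// Group consecutive same-team events into possessions. Because the atomic
// record is never rewritten, any other aggregation remains possible later.
function derivePossessions(events: AtomicEvent[]): Possession[] {
  const possessions: Possession[] = [];
  for (const event of events) {
    const current = possessions[possessions.length - 1];
    if (current && current.team === event.team) {
      current.events.push(event);
    } else {
      possessions.push({ team: event.team, events: [event] });
    }
  }
  return possessions;
}

const possessions = derivePossessions([
  { type: "pass", team: "home" },
  { type: "carry", team: "home" },
  { type: "pass", team: "away" },
]);
// Two possessions: home (2 events), then away (1 event).
```

Keeping aggregation as a pure function over atomic facts is what lets the same record answer both "what happened at second 73" and "how long was that possession".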

**Keyboard Experience & Muscle Memory:**
- **Insight:** Contextual keyboard mappings reduce decision-making load
- **Impact:** Collectors develop muscle memory and work faster without conscious deliberation
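
One way to picture contextual mappings (purely illustrative; the real collection tool's contexts and actions are not documented here): the same physical key resolves to a context-appropriate action, so frequent actions stay under the same fingers in every state.

```typescript
// Hypothetical sketch: key-to-action lookup keyed by collection context.
// Stable keys ("p" is always pass) build muscle memory; only the
// context-specific ones ("c") change meaning.
type Context = "open_play" | "set_piece";

const keymap: Record<Context, Record<string, string>> = {
  open_play: { p: "record_pass", c: "record_carry", s: "record_shot" },
  set_piece: { p: "record_pass", c: "record_corner", s: "record_shot" },
};

function actionFor(context: Context, key: string): string | undefined {
  return keymap[context][key];
}
```

The design choice worth noting: the lookup is data, not branching logic, so adding a context is a config change rather than a code change.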

### Wise Stories

**Layer Inference from Friction:**
- **Context:** Content teams (designers, developers, copy creators) as the main users of the Editorial stack
- **Practitioner behavior:** Daily workflow friction showed separation needs
- **Architectural decision:** Constructing (lower layer) → Governance (approval/compliance) → Guidance (on-brand)
- **Key insight:** Teams never explicitly requested this separation - it was inferred from watching their work
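
The three inferred layers can be sketched as independently composable stages. The `Draft` shape and the specific governance/guidance rules below are invented for illustration; the durable point is that construction stays unaware of the layers above it.

```typescript
// Hypothetical sketch: three editorial layers as composable stages, so
// governance and guidance can evolve without touching construction.
type Draft = { body: string; approved: boolean; onBrand: boolean };

// Constructing: build raw content (lowest layer).
function construct(body: string): Draft {
  return { body, approved: false, onBrand: false };
}

// Governance: approval/compliance checks (illustrative rule only).
function govern(draft: Draft): Draft {
  return { ...draft, approved: !draft.body.includes("guaranteed") };
}

// Guidance: flag copy drifting off-brand (illustrative rule only).
function guide(draft: Draft): Draft {
  return { ...draft, onBrand: !draft.body.includes("!!!") };
}

const clean = guide(govern(construct("Send money abroad")));
const flagged = guide(govern(construct("guaranteed gains!!!")));
```

Because each stage takes and returns the same `Draft` shape, a team can iterate on guidance rules without renegotiating the construction or governance contracts.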

**DX & Adoption Balance:**
- **Goal:** Enhance developer experience and adoption while preserving creativity and flexibility, keeping output on-brand without conscious effort
- **Approach:** Iterative feedback loops with the teams using the stack

## Copy Implementation

### About Page: "Architecture from the Source" Section

**Placement:** After technical expertise, before highlights
**Word count:** ~205 words
**Purpose:** Position practitioner engagement as a core architectural skill

---

**Architecture from the Source**

Over 15 years, a pattern emerged: architecture doesn't come from transcribing feature requests - it emerges from interpreting workflow friction. At Statsbomb, daily conversations with 1000+ collectors revealed not just bugs, but what mattered: keyboard experiences that built muscle memory, decision flows that reduced cognitive load, collaboration patterns that eliminated rework. When collectors described working "2 per match, each following a team," they weren't asking for a feature - they were showing us redundant effort. Breaking matches down by decision rather than team wasn't on any requirements doc.

The best systems shift validation to prevention, letting practitioners focus on judgment over correction. Event storming sessions with lead collectors mapped domains - collection, operations, media, aggregation - revealing that "pass, pass, dribble" needed to be both instantaneous events and durational carry. That insight became architectural: valid by default, with computer vision and linting catching 99% of issues, leaving humans for the 1% edge cases requiring actual judgment.

At Wise, content teams never asked to "separate constructing from governance" - but watching their daily friction revealed the layers. When architecture responds to what practitioners *show* through their work, systems fit their reality instead of forcing reality to fit systems.

---

### Case Study Narrative Integration Patterns

#### Pattern 1: Problem Section with Practitioner Context

**Structure:**
1. Open with a question (Socratic authority)
2. Ground in a practitioner constraint/insight
3. Show what was revealed through conversation
4. Connect to the architectural decision

**Example (Statsbomb):**

> How do you design a collection system when collectors process 90-minute matches in real-time? Daily conversations with collectors revealed the actual constraints: they needed keyboard flows that built muscle memory, contextual mappings that reduced decision-making load, and collaboration patterns that eliminated rework. The existing approach—2 collectors per match, each following a team—looked like redundancy on paper. Talking with collectors showed it was duplicated effort.
>
> Event storming sessions with lead collectors mapped the domain into contexts: information, collection, operations, media, aggregation. One insight reshaped the architecture: "pass, pass, dribble" were instantaneous events *and* durational carry. Multiple carries became possession. The system needed to model facts at atomic and aggregate levels simultaneously—something requirements documents wouldn't have surfaced.

#### Pattern 2: Architecture Section with Inference

**Structure:**
1. State the thesis ("architecture emerged from interpreting friction")
2. Show a specific decision that wasn't explicitly requested
3. Explain the shift (validation → prevention, or another principle)
4. Connect to human capability/practitioner reality

**Example (Statsbomb):**

> Architecture emerged from interpreting workflow friction, not transcribing requests. Breaking matches down by decision rather than team—something no collector explicitly asked for—increased correctness without additional effort. The collection experience was redesigned to be valid by default: computer vision assisted input, contextual keyboard mappings reduced cognitive load, and linting caught 99% of errors automatically.
>
> This shifted validation to prevention. Collectors focused on judgment over correction—handling the 1% edge cases where human expertise mattered (event type conflicts, judgment calls on data points) rather than catching preventable mistakes. Systems designed for human capability let practitioners work within their actual limits, not aspirational ones.

**Example (Wise):**

> When content teams at Wise surfaced daily workflow friction, the Editorial stack had to respond. They didn't ask to "separate constructing from governance"—but watching their work revealed the layers. Constructing (building content) was a lower-level concern than governance (approval flows, compliance) and guidance (staying on-brand without thinking).
>
> Inference beat explicit requirements. The architecture separated layers not because teams requested it, but because their friction showed what mattered: rapid iteration, cross-team collaboration, creativity within constraints, and satisfactory flexibility without sacrificing brand consistency. Systems that interpret workflow friction fit practitioner reality instead of forcing reality to fit the system.

## Tone Compliance Checklist

Per CLAUDE.md tone guidelines (Philosophical + Humble + Collaborative):

- [x] **Question-led authority** - Opens with Socratic questions, not claims
- [x] **Collaborative framing** - "conversations revealed", "watching their work", "teams surfaced"
- [x] **Philosophical systems thinking** - "architecture emerges", "inference beats requirements", "validation to prevention"
- [x] **Humble learning orientation** - "pattern emerged", "revealed", "showed us"
- [x] **Evidence-based confidence** - 15 years, 1000+ collectors, specific companies, 99% metric
- [x] **No arrogance** - Zero "I built X" without collaborative context
- [x] **Invitational CTAs** - Not applicable in this content (no CTAs in About/case studies)

## Forbidden Patterns Avoided

- ❌ Bold claims: "I build systems that can't break"
- ❌ Hero narrative: "I designed", "I achieved" (without team context)
- ❌ Definitive statements without humility: "The right question makes implementation obvious"
- ❌ Generic platitudes: "listen to users", "customer-first" (without real stories)

## Implementation Notes

### Files to Update

1. **About page:** `src/pages/about.astro`
   - Add "Architecture from the Source" section
   - Place after expertise, before highlights
   - ~205 words

2. **Case studies:** Apply narrative integration patterns
   - Problem sections: Open with practitioner constraint question
   - Architecture sections: Show inference and validation→prevention shift
   - Use real stories from this document

### Maintenance

- When adding new case studies: Use the narrative integration patterns
- When describing new work: Apply the thesis statements naturally
- Always ground in real practitioner stories, not generic positioning
- Run the tone compliance checklist before publishing

## Success Metrics

- About page explicitly positions practitioner engagement as an architectural skill
- Case studies show HOW work happened (event storming, daily conversations, inference)
- Zero generic "listen to users" language - all backed by real stories
- Tone audit passes (collaborative, humble, evidence-based)

---

## Refactoring: Eliminating Redundancy (2025-10-29)

### Issues Identified

1. **About page redundancy**: The "Architecture from the Source" section repeated specific Statsbomb stories that belong in the case study
2. **Pass/carry insight incomplete**: The original description said "instantaneous events AND durational carry" but missed the core insight about arbitrary aggregation from atomic facts
3. **Missing metadata story**: Golden entity resolution (5 people supporting thousands) demonstrates the same thesis as event collection
4. **Ops scaling underemphasized**: The scaling context (100 → thousands of collectors) needed more weight

### Refactoring Decisions

#### About Page (src/pages/about.astro:59-71)

**Before:** Full Statsbomb stories with specific details (2-collectors, event storming, computer vision, 99% metrics)
**After:** Abstract principles applicable to any project
- Removed all company-specific examples
- Kept thesis statements as general principles
- Reduced from ~205 to ~150 words
- Maintains the philosophical + humble + collaborative tone

**Rationale:** The About page should position the APPROACH; case studies show the EXECUTION

#### Statsbomb Case Study

**Problem Section - Added metadata challenge:**
- New paragraph after the event storming insight
- Emphasizes a different architectural problem: crowd-sourced reference data vs real-time event collection
- Shows a 5-person team supporting thousands (architectural leverage)

**Architecture Section - Split into subsections:**

1. **"Event Collection: Aggregation from Atomic Facts"**
   - Fixed pass/carry insight: emphasizes arbitrary aggregation (pass + carry → possession)
   - Valid by default with computer vision + linting
   - Ops scaling (100 → thousands)
   - DSL for configuration-driven sports rules

2. **"Metadata Management: Automated Entity Resolution"**
   - Golden entity resolution from crowd-sourced data
   - 5 people supporting thousands via automation
   - Confidence scoring + edge case escalation
   - Same thesis: validation → prevention, judgment > correction

**Rationale:** Two parallel stories demonstrate thesis breadth—not just event collection, but reference data management at scale

### Updated Real Stories

**Arbitrary Aggregation Insight:**
- **Context:** Event storming with lead collectors
- **Insight:** Individual passes and carries are atomic events; multiple carries in sequence become "possession" (a derived higher-level fact)
- **Impact:** System needed to support any aggregation pattern, not just predefined ones

**Metadata Architectural Leverage:**
- **Challenge:** Thousands of collectors, crowd-sourced data, conflicting sources (same player, different spellings)
- **Constraint:** The 5-person metadata team couldn't manually reconcile
- **Solution:** Automated golden entity resolution with confidence scoring
- **Result:** 5 people supporting thousands via architectural leverage (1-2% edge cases require human domain expertise)
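
The confidence-scoring pattern above can be sketched as follows. The similarity measure is deliberately crude, and every name here (`resolve`, `AUTO_ACCEPT`, the 0.9 threshold, the sample player names) is a hypothetical stand-in for whatever the real resolver used; the durable idea is the auto-merge/escalate split that lets a small team support thousands of contributors.

```typescript
// Hypothetical sketch of golden entity resolution: score a crowd-sourced
// candidate against the golden name, auto-merging only confident matches.
type Candidate = { name: string; source: string };

// Crude similarity: shared-letter ratio of accent-stripped names.
// Illustrative only; a real resolver would use trained matchers.
function similarity(a: string, b: string): number {
  const norm = (s: string) =>
    s.normalize("NFD").toLowerCase().replace(/[^a-z]/g, "");
  const [x, y] = [norm(a), norm(b)];
  if (x === y) return 1;
  const xChars = Array.from(new Set(x));
  const yChars = new Set(y);
  const shared = xChars.filter((ch) => yChars.has(ch)).length;
  return shared / Math.max(xChars.length, yChars.size, 1);
}

const AUTO_ACCEPT = 0.9; // hypothetical confidence threshold

function resolve(golden: string, candidate: Candidate) {
  const score = similarity(golden, candidate.name);
  return score >= AUTO_ACCEPT
    ? { decision: "merge" as const, score }
    : { decision: "escalate" as const, score };
}

// Accent-only variants merge automatically; structural differences escalate.
const exact = resolve("Sergio Aguero", { name: "Sergio Agüero", source: "feed-a" });
const fuzzy = resolve("Sergio Aguero", { name: "S. Kun Aguero", source: "feed-b" });
```

Escalating rather than guessing on low-confidence matches is the same judgment-over-correction split as event collection: automation handles the bulk, and human domain expertise is spent only where it is actually needed.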

---

**Document Status:** Complete, validated, and refactored to eliminate redundancy
**Implementation Status:** All changes deployed to About page and Statsbomb case study