diff --git a/.github/agents/implement-story.agent.md b/.github/agents/implement-story.agent.md
new file mode 100644
index 00000000..224c5532
--- /dev/null
+++ b/.github/agents/implement-story.agent.md
@@ -0,0 +1,228 @@
+---
+description: Implements a feature that automates the development of a user story in a markdown file using multiple MCP (Model Context Protocol) servers.
+---
+
+# Copilot Prompt: Implement Feature
+
+You are a **senior software engineer** implementing a feature using **test-driven development**. Your goal is to understand the story, gather all supplemental resources (Figma links), and build the functionality test-first, prioritizing component variants before feature composition.
+
+**Critical: Consult available skills before implementing.** Skills contain domain-specific knowledge and specialized workflows that should guide your approach.
+
+---
+
+## Implementation Priority
+
+**Critical: Follow this order:**
+
+1. **Design Discovery** - Gather all design context from Figma
+2. **Component Implementation** - Build UI components
+3. **Feature Composition** - Assemble components into the complete feature
+4. **Backend Logic** - Database schema, tRPC procedures, seed data
+
+---
+
+## Step 1: Parse the Ticket Content
+
+- Scan the description and comments for:
+ - Figma links (e.g. `https://www.figma.com/file/...`)
+ - Any references to file attachments
+- **Consult skills**: Check if any skills provide guidance for parsing or understanding ticket content
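+
As a minimal sketch of the link-scanning step (the helper name is hypothetical, and the pattern covers only the common `figma.com/file` and `figma.com/design` URL shapes):

```typescript
// Hypothetical helper for scanning ticket text; not part of any MCP server API.
function findFigmaLinks(ticketBody: string): string[] {
  // Matches both figma.com/file/... and figma.com/design/... links,
  // stopping at whitespace or common closing punctuation.
  const pattern = /https:\/\/(?:www\.)?figma\.com\/(?:file|design)\/[^\s)>\]]+/g;
  return ticketBody.match(pattern) ?? [];
}
```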
+
+---
+
+## Step 2: Design Discovery
+
+**Understanding Figma Link Types:**
+
+There are **TWO acceptable types** of Figma links in specifications:
+
+1. **Example Links** - Show the feature in context (Frame, Section, Group nodes)
+ - Purpose: Understand placement, layout, and business logic
+ - Used for: Composition guidance, feature integration context
+ - Action: Get screenshot and understand structure
+
+2. **Component Set Links** - Point to actual Figma Component Sets
+ - Purpose: Create React components with variants
+ - **CRITICAL**: Must be a `COMPONENT_SET` node type - nothing else is acceptable
+ - Action: Follow TDD workflow via component-set-testing skill
+ - **If not a COMPONENT_SET**: STOP and report invalid link
+
+**Figma URL Processing Workflow:**
+
+1. **Consult skills FIRST:**
+ - Review available skills for guidance on handling Figma URLs
+ - Check if skills provide workflows for determining node types, extracting design values, or routing to specialized implementations
+ - Follow skill-provided workflows exactly if they exist
+
+2. **Classify each Figma URL:**
+ - Extract `fileKey` and `nodeId` from the URL
+ - Call `mcp_figma_get_design_context` to determine the node type
+ - **Validate the link type:**
+ - If labeled as "component set" but `node.type !== "COMPONENT_SET"`:
+ - **STOP immediately**
+ - Report: "Invalid Figma link: Expected COMPONENT_SET but found [actual type]. This link cannot be used for component implementation. Please provide a link to an actual Figma Component Set."
+ - Do not proceed with implementation
+ - If `node.type === "COMPONENT_SET"`:
+ - Valid component set link - proceed with TDD workflow
+ - If `node.type` is Frame/Section/Group:
+ - Valid example link - use for context and composition
+
+3. **Process valid links by type:**
+ - **Example links (Frame/Section/Group):**
+ - Call `mcp_figma_get_screenshot` for visual reference
+ - Call `mcp_figma_get_variable_defs` to track exact values
+ - Use for understanding feature composition and layout
+
+ - **Component Set links (COMPONENT_SET):**
+ - Follow component-set-testing skill for TDD workflow
+ - Call `mcp_figma_get_screenshot` for visual reference
+ - Call `mcp_figma_get_variable_defs` to track exact values
+ - Generate tests first, then implement component
+
+4. **Check skills for implementation workflows:**
+ - Based on the node type discovered, consult skills for specialized implementation approaches
+ - Skills may provide guidance on:
+ - When to write tests first vs. implementation first
+ - How to handle multi-variant components
+ - How to structure component files and tests
+ - How to verify implementation against design
+ - Follow skill workflows exactly when provided
+ - Only fall back to standard approaches if no relevant skill guidance exists
+
+5. **Repeat for each Figma URL** in the ticket before proceeding
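+
The classification rules above can be sketched as a small pure function. This is illustrative only: in the real workflow the node type comes from `mcp_figma_get_design_context`, and `classifyFigmaLink` is a hypothetical name.

```typescript
// Illustrative classification sketch; node types come from
// mcp_figma_get_design_context in the real workflow.
type LinkClassification = "component-set" | "example" | "invalid";

function classifyFigmaLink(nodeType: string, labeledAsComponentSet: boolean): LinkClassification {
  if (nodeType === "COMPONENT_SET") return "component-set"; // proceed with TDD workflow
  if (labeledAsComponentSet) return "invalid"; // labeled component set but wrong type: STOP
  if (["FRAME", "SECTION", "GROUP"].includes(nodeType)) return "example"; // context only
  return "invalid";
}
```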
+
+---
+
+## Step 3: Component Implementation
+
+**Before writing any component code:**
+
+1. **Consult skills for component implementation guidance:**
+ - Check if skills provide testing strategies for the component type
+ - Review skills for guidance on component structure and organization
+ - Look for skills that specify when to write tests first vs. implementation first
+ - Follow skill-provided workflows exactly
+
+2. **If no skill guidance exists, use standard TDD approach:**
+ - For multi-variant components: Create test file first, then implement
+ - For single components: Implement using tracked design values, then test
+ - Write tests incrementally and run them frequently
+
+3. **During implementation - Critical styling requirements:**
+ - **Spacing:** Check EVERY element in Figma for padding, margin, gap, dimensions
+ - **Pseudo-classes:** If element is interactive, add appropriate pseudo-class styles (hover:, focus:, active:)
+ - **Event handlers:** Wire up event handler props and verify they work correctly
+ - **Colors:** Apply pseudo-class styles to ALL child elements (icon + text), not just parent
+
+4. **After implementation - MANDATORY validation checkpoint:**
+ - **🚨 STOP - Run implementation-validation skill before proceeding**
+ - Consult implementation-validation skill for complete checklist
+ - Verify all spacing, pseudo-class styles, event handlers are present
+ - Manually test component in browser/Storybook
+ - Only proceed after ALL validation items pass
+
+---
+
+## Step 4: Feature Composition
+
+**After components are implemented and validated:**
+
+1. **Assemble components:**
+ - Integrate individual components into feature layout
+ - Follow example Figma links for composition guidance
+ - Apply tracked spacing values between components
+
+2. **Validate composition:**
+ - **🚨 STOP - Run implementation-validation skill on composed feature**
+ - Check spacing between major sections
+ - Verify interactive flows work end-to-end
+ - Test all event handlers and pseudo-class states in browser
+
+---
+
+## Step 5: Backend Implementation
+
+Only after frontend components and composition are complete:
+
+### Architecture & Conventions
+
+- **Consult skills**: Check for guidance on backend patterns, type management, or database schema design
+- Follow existing team conventions and project architecture
+- Prioritize clarity, maintainability, and modularity
+- Only implement what is explicitly described in the ticket; do not add additional features
+
+### Test-Driven Implementation
+
+- **Consult skills**: Check for testing strategies or patterns for backend code
+- Before modifying existing code, search for related tests using grep_search or file_search
+- When adding fields to database models or API responses:
+ - **Consult skills**: Check for guidance on managing types across packages
+ - Identify all tests that mock or assert on those structures
+ - Update tests in parallel with implementation changes
+ - Run tests immediately after modifying code to catch issues early
+- For new functionality:
+ - Create tests alongside implementation, not as a final step
+ - Run tests incrementally to verify each change works
+- When modifying tRPC procedures:
+ - Check router.test.ts for existing tests
+ - Update mock data to match new response structures
+ - Ensure optimistic updates in client code align with server responses
+- Implementation workflow for changes to existing code:
+ 1. Search for related tests before making changes (grep_search for test files)
+ 2. Review tracked values from Step 2 (Design Discovery)
+ 3. Make implementation changes matching Figma exactly
+ 4. Update tests to match new behavior/structure
+ 5. Run tests immediately to verify
+ 6. Only proceed to next change after tests pass
+
+### UI Implementation Fidelity
+
+- **Consult skills**: Check for guidance on styling precision, Tailwind usage, or design token mapping
+- **Use ONLY tracked values:**
+ - All measurements must come from `mcp_figma_get_variable_defs`
+ - Never estimate or assume values
+ - Match exact element ordering and hierarchy from screenshots
+
+- **Styling precision (if no skill guidance):**
+ - **Dimensions:** Use exact tracked pixel values (e.g., `h-[21px]`, `w-[18px]`)
+ - **Spacing:** Map tracked variables to Tailwind (e.g., Spacing/xs=4px → `p-1`)
+ - **Typography:** Use tracked font values from design variables
+ - **Colors:** Use exact tracked color values or design system tokens
+ - **Layout:** Match flex direction, alignment, and order from screenshots
+
+- When Tailwind doesn't have exact values, use arbitrary values: `h-[21px]`, `gap-[18px]`
+- Consult skills for guidance on simplifying Tailwind utilities
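+
A rough sketch of the mapping rule above, assuming the default Tailwind spacing scale of 4px per unit (`paddingClassFor` is a hypothetical helper, not a project utility):

```typescript
// Hypothetical helper: map a tracked pixel value to a Tailwind padding class.
// Assumes the default Tailwind scale (1 unit = 0.25rem = 4px); the real scale
// has gaps above p-12, so treat this as a sketch, not a complete mapping.
function paddingClassFor(px: number): string {
  if (px % 4 === 0) return `p-${px / 4}`; // e.g. 4px maps to p-1, 16px to p-4
  return `p-[${px}px]`; // fall back to an arbitrary value for odd measurements
}
```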
+
+### Seed Data
+
+- Update seed data to demonstrate component variations:
+ - Create records for edge cases (zero values, min, max, mixed states)
+ - Add inline comments indicating which variation each record demonstrates
+ - Ensure realistic data distribution across all possible states
+ - Update seed file clearing logic to include any new database tables
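+
A minimal sketch of variant-covering seed records; the `BadgeSeed` shape and every field name below are hypothetical, not taken from the actual schema:

```typescript
// Hypothetical seed records demonstrating variant coverage; the shape is
// illustrative, not the real database schema.
interface BadgeSeed {
  label: string;
  count: number;
  disabled: boolean;
}

const badgeSeeds: BadgeSeed[] = [
  { label: "Inbox", count: 0, disabled: false },   // zero/empty state
  { label: "Alerts", count: 1, disabled: false },  // minimum non-zero count
  { label: "Queue", count: 999, disabled: false }, // maximum/overflow display
  { label: "Archive", count: 12, disabled: true }, // disabled state
];
```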
diff --git a/.github/skills/component-set-test-driven-development/SKILL.md b/.github/skills/component-set-test-driven-development/SKILL.md
new file mode 100644
index 00000000..3e4f9226
--- /dev/null
+++ b/.github/skills/component-set-test-driven-development/SKILL.md
@@ -0,0 +1,1704 @@
+---
+name: component-set-test-driven-development
+description: 🚨 MANDATORY for Figma Component Sets. Test-driven development workflow that MUST be executed BEFORE writing any component code. Discovers variants, generates failing tests first, then guides implementation to pass tests. Used automatically by figma-url-router when Component Set is detected.
+---
+
+# Skill: Component Set Testing (Test-Driven Development)
+
+**⚠️ CRITICAL: This skill MUST be executed BEFORE writing component implementation code.**
+
+This skill enforces test-driven development for React components based on Figma Component Sets. Tests are written first based on Figma variants, then implementation follows to make tests pass.
+
+## Overview
+
+When working with Figma Component Sets, this skill automates:
+1. Component creation/verification in the codebase
+2. Discovery of all variant properties and their values
+3. Generation of comprehensive unit tests for every combination
+4. Cross-verification of values against design variables
+5. Documentation of visual and behavioral differences
+6. **Mandatory validation** before implementation begins
+
+## When to Use
+
+**⚠️ YOU WILL BE ROUTED HERE** - This skill is automatically invoked by the figma-url-router skill when a COMPONENT_SET is detected.
+
+**DO NOT USE THIS SKILL DIRECTLY** - Instead:
+1. Use figma-url-router skill when you see ANY Figma URL
+2. The router will detect the node type and route you here if needed
+3. If you're here, follow the TDD process exactly
+
+**Manual override only if:**
+- Auditing existing component tests for compliance
+- Verifying test coverage post-implementation
+
+## Critical Success Factors
+
+**🚨 STOP AND READ - These are non-negotiable:**
+
+1. **TESTS MUST BE WRITTEN FIRST** - Component file does NOT exist until Phase 6
+2. **TRACKING TABLE IS MANDATORY** - Phase 3 must produce a structured table
+3. **VALIDATION SCRIPT MUST PASS** - Run validate-tests.sh before implementation
+4. **EVERY TEST NEEDS VARIANT COMMENTS** - Format: `// Figma Variant: Property=Value`
+5. **ALL VALUES FROM DESIGN VARIABLES** - No guessing, no approximations
+
+**If ANY of these are violated, the implementation MUST be rejected and redone.**
+
+## Workflow Overview (Before Phase Details)
+
+```
+figma-url-router detects COMPONENT_SET
+ ↓
+ Phase 0: Verify Component Set
+ ↓
+ Phase 1: Create TEST file first (NO component yet)
+ ↓
+ Phase 2: Discover all variants
+ ↓
+ Phase 2.4: Create describe blocks with TODO placeholders
+ ↓
+ Phase 3: Gather test requirements from THREE sources (ALL AS TODOs)
+ ├─ 3.1: Analyze variant names → predict test TODOs
+ ├─ 3.2: Capture screenshots → confirm/expand test TODOs
+ ├─ 3.3: Analyze spec/context → add application state TODOs
+ └─ 3.4: Create tracking table from all TODOs
+ ↓
+ Phase 4: Verify ALL values against design variables
+ ↓
+ Phase 5: Implement tests (fill in TODOs from Phase 3)
+ ↓
+ 🚨 CHECKPOINT: Phase 5.4 validation + run validate-tests.sh
+ ↓
+ Phase 6: NOW create component file
+ ↓
+ Phase 7: Red-Green-Refactor until all tests pass
+ ↓
+ 🚨 FINAL: Run validate-tests.sh again
+```
+
+### Phase 0: Verify Component Set (DO THIS FIRST)
+
+**MCP Tools Used:**
+- `mcp_figma_get_design_context` - Verify node type is COMPONENT_SET
+- `grep_search` or `file_search` - Check if component already exists
+
+**Skills Referenced:**
+- `mcp-error-handling` - Handle MCP tool errors during verification
+
+#### 0.1 Identify the Figma Node
+- Extract `fileKey` and `nodeId` from the Figma URL
+- URLs follow pattern: `https://figma.com/design/:fileKey/:fileName?node-id=1-2`
+- Node ID extraction: `1-2` → `1:2`
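+
The extraction above can be sketched as follows (`extractFigmaIds` is an illustrative name; this assumes Node's global `URL` and the `figma.com/design/:fileKey/:fileName?node-id=1-2` shape described here):

```typescript
// Sketch of fileKey/nodeId extraction from the URL pattern described above.
function extractFigmaIds(url: string): { fileKey: string; nodeId: string } | null {
  const parsed = new URL(url);
  // fileKey is the path segment after /design/ or /file/
  const match = parsed.pathname.match(/^\/(?:design|file)\/([^/]+)/);
  const rawNodeId = parsed.searchParams.get("node-id"); // e.g. "1-2"
  if (!match || !rawNodeId) return null;
  // Figma node IDs use colons internally: "1-2" becomes "1:2"
  return { fileKey: match[1], nodeId: rawNodeId.replace(/-/g, ":") };
}
```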
+
+#### 0.2 Check if Component Set
+```typescript
+// REQUIRED: Call this BEFORE any implementation
+const context = await mcp_figma_get_design_context({
+ nodeId: "1:2",
+ fileKey: "abc123"
+});
+
+// Check the document type
+if (context.document.type === "COMPONENT_SET") {
+  // ✅ This is a Component Set - PROCEED WITH THIS SKILL
+  // ❌ DO NOT write component code yet
+  // ✅ Write tests first using this skill
+} else {
+ // This is a regular component/frame - use normal implementation flow
+}
+```
+
+#### 0.3 Verify Component Does NOT Exist Yet
+- Search codebase: `grep_search` or `file_search` for component name
+- If component already exists β use this skill for test auditing only
+- If component is missing β GOOD, proceed with TDD approach
+- **DO NOT create component file yet** - tests come first
+
+### Phase 1: Test File Creation (BEFORE Component)
+
+**MCP Tools Used:**
+- `file_search` - Find existing test file patterns
+- `grep_search` - Search for component existence
+- `create_file` - Create test file with describe block structure
+
+**Skills Referenced:**
+- `mcp-error-handling` - Handle MCP tool errors during file operations
+
+#### 1.1 Create Test File First
+- Search for existing component test files to understand project structure:
+ - Use `file_search` with pattern `**/*.test.tsx` or `**/*.test.ts`
+ - Identify the common directory pattern (e.g., `src/components/`, `app/components/`, `lib/`)
+- Create test file following the discovered pattern:
+ - Typically: `[ComponentName].test.tsx` adjacent to where component will live
+ - Common patterns: `ComponentName/ComponentName.test.tsx` or flat `ComponentName.test.tsx`
+- If no pattern exists, follow project README or ask user for preferred location
+- Create directory structure if it doesn't exist
+- Start with empty test file and describe blocks
+- **Component implementation file does NOT exist yet**
+
+#### 1.2 Verify Component Exists
+- Check if a React component exists for this Component Set
+- Search codebase: `grep_search` or `file_search` for component name
+- If missing, note that component file will be created later (Phase 6)
+- Component file should follow same directory pattern as test file
+
+### Phase 2: Variant Discovery
+
+**MCP Tools Used:**
+- `mcp_figma_get_design_context` - Load component structure and variant definitions
+
+**Skills Referenced:**
+- `mcp-error-handling` - Handle MCP tool errors during variant discovery
+
+#### 2.1 Load Component Set Metadata
+Call `mcp_figma_get_design_context` to load the full component structure:
+- This returns component hierarchy and `componentPropertyDefinitions`
+- Document type is `COMPONENT_SET` for multi-variant components
+
+#### 2.2 Parse Variant Properties
+From `componentPropertyDefinitions`, extract:
+- **Property names**: State, Size, Type, etc.
+- **Property values**: Each possible option (e.g., State: [Default, Hover, Active, Disabled])
+- **Property types**: VARIANT (boolean/string), TEXT, INSTANCE_SWAP
+
+Example structure:
+```json
+{
+ "componentPropertyDefinitions": {
+ "State": {
+ "type": "VARIANT",
+ "defaultValue": "Default",
+ "variantOptions": ["Default", "Hover", "Active", "Disabled"]
+ },
+ "Size": {
+ "type": "VARIANT",
+ "defaultValue": "Medium",
+ "variantOptions": ["Small", "Medium", "Large"]
+ }
+ }
+}
+```
+
+#### 2.3 Generate All Combinations
+Create cartesian product of all variant properties:
+- State × Size = 4 × 3 = 12 total combinations
+- Document each combination: `[State=Default, Size=Small]`, `[State=Default, Size=Medium]`, etc.
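+
The cartesian product can be computed generically. This sketch assumes the `variantOptions` arrays from the `componentPropertyDefinitions` example above; `allCombinations` is an illustrative helper name.

```typescript
// Generic cartesian product over variant properties, e.g.
// { State: [4 options], Size: [3 options] } yields 12 combinations.
function allCombinations(props: Record<string, string[]>): Record<string, string>[] {
  return Object.entries(props).reduce<Record<string, string>[]>(
    (combos, [name, options]) =>
      // For each partial combination, branch once per option of this property.
      combos.flatMap((combo) => options.map((value) => ({ ...combo, [name]: value }))),
    [{}],
  );
}
```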
+
+#### 2.4 Create Test Structure with TODO Placeholders
+
+**🚨 CRITICAL: Immediately after discovering all variant combinations, create the complete test file structure with TODO placeholders.**
+
+**Step 1: Create describe blocks for EVERY variant property and value combination:**
+
+1. Use the variant combinations just generated in Phase 2.3
+2. For EACH unique combination of properties and values, create a describe block in the test file
+3. Add a comment above EACH describe block indicating which variant properties and values it covers
+4. Leave TODO placeholders inside each describe block - DO NOT write actual test code yet
+
+**Required Comment Format:**
+```typescript
+// Figma Variant: Property1=Value1, Property2=Value2
+describe('Descriptive name for this variant combination', () => {
+ // TODO: Add tests for Property1=Value1, Property2=Value2
+});
+```
+
+**Example structure for a component with State and ShowCount properties:**
+
+```typescript
+describe('Badge', () => {
+ // Figma Variant: State=Default, ShowCount=True
+ describe('Default state with count visible', () => {
+ // TODO: Add tests for State=Default, ShowCount=True
+ });
+
+ // Figma Variant: State=Default, ShowCount=False
+ describe('Default state with count hidden', () => {
+ // TODO: Add tests for State=Default, ShowCount=False
+ });
+
+ // Figma Variant: State=Selected, ShowCount=True
+ describe('Selected state with count visible', () => {
+ // TODO: Add tests for State=Selected, ShowCount=True
+ });
+
+ // Figma Variant: State=Selected, ShowCount=False
+ describe('Selected state with count hidden', () => {
+ // TODO: Add tests for State=Selected, ShowCount=False
+ });
+
+ // Figma Variant: State=Disabled, ShowCount=True
+ describe('Disabled state with count visible', () => {
+ // TODO: Add tests for State=Disabled, ShowCount=True
+ });
+
+ // Figma Variant: State=Disabled, ShowCount=False
+ describe('Disabled state with count hidden', () => {
+ // TODO: Add tests for State=Disabled, ShowCount=False
+ });
+
+ // Structural tests (apply to all variants)
+ describe('Element ordering', () => {
+ // TODO: Add structural tests that apply across all variants
+ });
+});
+```
+
+**Why this step happens NOW (before visual analysis):**
+- ✅ Creates clear sections for organizing tests by variant BEFORE analyzing differences
+- ✅ Establishes the comment pattern that MUST be followed from the start
+- ✅ Makes it obvious which variant combinations need analysis in Phase 3
+- ✅ Prevents accidentally missing variant combinations during visual analysis
+- ✅ Provides a checklist of work to complete as you analyze each variant
+
+**Step 2: Review completeness:**
+- [ ] EVERY combination from Phase 2.3 has a describe block with TODO
+- [ ] EVERY describe block has a `// Figma Variant:` comment listing ALL properties and values
+- [ ] Comment format is consistent: `// Figma Variant: Property1=Value1, Property2=Value2`
+- [ ] Test file structure is ready to receive test implementations
+
+**🚨 DO NOT proceed to Phase 3 until all describe blocks with TODO placeholders are created.**
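+
The skeleton in Step 1 can be generated mechanically from the Phase 2.3 combinations. `describeSkeleton` is a hypothetical helper shown only to make the mandatory comment format concrete:

```typescript
// Hypothetical generator for one describe-block skeleton; the comment line
// follows the mandatory `// Figma Variant: Property=Value` pattern.
function describeSkeleton(combo: Record<string, string>): string {
  const variants = Object.entries(combo)
    .map(([prop, value]) => `${prop}=${value}`)
    .join(", ");
  return [
    `// Figma Variant: ${variants}`,
    `describe('${variants}', () => {`,
    `  // TODO: Add tests for ${variants}`,
    `});`,
  ].join("\n");
}
```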
+
+### Phase 3: Visual Analysis & Test Tracking
+
+**MCP Tools Used:**
+- `mcp_figma_get_screenshot` - Capture visual state of each variant
+- `mcp_figma_get_design_context` - Compare variants for missing use cases and conditional styles
+- `mcp_figma_get_variable_defs` - Verify all values against design tokens
+
+**Skills Referenced:**
+- `mcp-error-handling` - Handle MCP tool errors during visual analysis
+
+**Goal:** Systematically gather ALL test requirements from three sources: variant naming, visual differences, and application context. Document as TODOs in tracking table BEFORE implementing any tests.
+
+**🚨 CRITICAL WORKFLOW: Gather → Document → Implement (NOT implement as you go)**
+
+#### 3.1 Analyze Variant Property and Value Names (First Pass - No Screenshots Yet)
+
+**Before looking at screenshots, infer test requirements from variant naming patterns.**
+
+**Step 1: Review each variant property/value combination from Phase 2.4**
+
+For each describe block created in Phase 2.4, analyze the variant property names and values to predict what tests will be needed:
+
+**Common Variant Property Patterns:**
+
+1. **User Interaction States** (e.g., `State=Hover`, `State=Focus`, `State=Active`, `State=Pressed`)
+ - **Inferred Tests (add as TODOs):**
+ - Cursor changes (e.g., `cursor-pointer` for clickable elements)
+ - Event handlers needed (`onClick`, `onHover`, `onFocus`)
+ - Pseudo-class styles needed (`:hover`, `:focus`, `:active`)
+ - Visual feedback on interaction (color changes, background changes)
+ - **Document in describe block:**
+ ```typescript
+ // Figma Variant: State=Hover
+ describe('Hover state', () => {
+ // TODO: Test cursor changes to pointer on interactive elements
+ // TODO: Test hover pseudo-class color changes on ALL child elements (icon + text)
+ // TODO: Test background color change on hover
+ // TODO: Test event handler wiring (onMouseEnter, onMouseLeave)
+ // TODO: After screenshots - check if overlay/tooltip appears
+ });
+ ```
+
+2. **Empty/Zero States** (e.g., `State=Empty`, `Count=0`, `HasData=False`, `ShowLabel=False`)
+ - **Inferred Tests (add as TODOs):**
+ - Conditional rendering of data displays
+ - Structural elements that should ALWAYS show (affordances)
+ - Spacing changes when content is hidden
+ - Empty state messaging or placeholders
+ - **Document in describe block:**
+ ```typescript
+ // Figma Variant: State=Empty, ShowCount=False
+ describe('Empty state with count hidden', () => {
+ // TODO: Test structural/affordance elements still visible (icons, buttons)
+ // TODO: Test data display elements hidden (counts, labels, badges)
+ // TODO: After screenshots - check spacing changes when elements hidden
+ // TODO: After screenshots - check for empty state messaging
+ });
+ ```
+
+3. **Selection/Active States** (e.g., `State=Selected`, `IsActive=True`, `IsCurrent=True`)
+ - **Inferred Tests (add as TODOs):**
+ - Background highlighting
+ - Border changes
+ - Icon/text color changes
+ - Multiple visual changes happening together
+ - **Document in describe block:**
+ ```typescript
+ // Figma Variant: State=Selected
+ describe('Selected state', () => {
+ // TODO: After screenshots - test background color change
+ // TODO: After screenshots - test border color/width change
+ // TODO: After screenshots - test icon color change
+ // TODO: After screenshots - test text color change
+ // TODO: Test selection prop (isSelected={true})
+ });
+ ```
+
+4. **Disabled States** (e.g., `State=Disabled`, `Disabled=True`)
+ - **Inferred Tests (add as TODOs):**
+ - Grayed-out colors
+ - `cursor-not-allowed`
+ - Disabled attribute/prop
+ - Reduced opacity
+ - **Document in describe block:**
+ ```typescript
+ // Figma Variant: State=Disabled
+ describe('Disabled state', () => {
+ // TODO: After screenshots - test gray color application to text/icons
+ // TODO: Test cursor-not-allowed class
+ // TODO: Test disabled prop/attribute
+ // TODO: After screenshots - test opacity changes
+ });
+ ```
+
+5. **Size/Spacing Variants** (e.g., `Size=Small`, `Size=Medium`, `Size=Large`)
+ - **Inferred Tests (add as TODOs):**
+ - Padding changes
+ - Gap changes
+ - Font size changes
+ - Dimension changes (width, height)
+ - **Document in describe block:**
+ ```typescript
+ // Figma Variant: Size=Large
+ describe('Large size variant', () => {
+ // TODO: After screenshots - measure padding values
+ // TODO: After screenshots - measure gap between elements
+ // TODO: After screenshots - measure dimensions (w, h)
+ // TODO: After screenshots - check font size changes
+ });
+ ```
+
+**Step 2: Update all Phase 2.4 describe blocks with these TODO comments**
+
+Go through every describe block created in Phase 2.4 and add relevant TODOs based on the variant names. These are **predictions** of what tests will be needed - we'll confirm with screenshots in Phase 3.2.
+
+**Example of updated describe block:**
+```typescript
+// Figma Variant: State=Hover, ShowCount=True
+describe('Hover state with count visible', () => {
+ // TODO (from naming): Test cursor-pointer on interactive elements
+ // TODO (from naming): Test hover: pseudo-classes for color changes
+ // TODO (from naming): Test event handlers (onMouseEnter, onMouseLeave)
+ // TODO (from naming): Confirm ALL child elements change color on hover
+ // TODO (deferred to 3.2): After screenshots - check if overlay appears
+ // TODO (deferred to 3.2): After screenshots - confirm exact color values
+});
+```
+
+**🚨 Checkpoint: Naming-based TODOs complete**
+- [ ] Every describe block has TODOs based on variant property/value names
+- [ ] TODOs marked as "from naming" vs "deferred to 3.2" (screenshots)
+- [ ] Interaction states have cursor, event handler, and pseudo-class TODOs
+- [ ] Empty states have conditional rendering TODOs
+- [ ] Size variants have spacing/dimension TODOs
+
+#### 3.2 Capture Screenshots and Analyze Visual Differences (Second Pass)
+
+**Now use screenshots AND design context to confirm predictions and discover additional test requirements.**
+
+**Step 1: Get design context for ALL variants**
+For each combination identified in Phase 2.3:
+- Call `mcp_figma_get_design_context` for EACH variant node
+- This provides the actual DOM structure, styles, and conditional rendering
+- **CRITICAL: Compare design context between variants to detect:**
+ - Missing child nodes (conditional rendering)
+ - Style property differences (colors, spacing, sizing)
+ - Values explicitly set to 0 (padding: 0, gap: 0, etc.)
+ - Overlay/absolute positioned nodes that appear in some variants
+
+**Step 2: Capture screenshots for each variant**
+For each combination identified in Phase 2.3:
+- Call `mcp_figma_get_screenshot` with the specific variant node
+- Store screenshots for visual reference
+- Compare against baseline variant (typically first/default)
+
+**Step 3: Get design variables for the component**
+- Call `mcp_figma_get_variable_defs` for the component node
+- This returns all design tokens/variables used (Colors, Spacing, Typography, etc.)
+- **CRITICAL: Use these to verify that values in design context match known variables**
+- **If a value in design context doesn't match a variable, it may be a custom/arbitrary value**
+
+**Step 4: Systematically compare design context and document differences**
+
+**For EACH variant combination:**
+
+**4A. Compare design context between variants:**
+```typescript
+// Example: Compare Rest vs Hover design context
+const restContext = await mcp_figma_get_design_context({ nodeId: "1:2", fileKey: "abc" });
+const hoverContext = await mcp_figma_get_design_context({ nodeId: "1:3", fileKey: "abc" });
+
+// Check for structural differences:
+// - Does hoverContext have additional child nodes? (tooltip, overlay)
+// - Do matching nodes have different style properties?
+// - Are any values explicitly set to 0? (don't assume these are "no spacing")
+```
+
+**4B. Document findings as TODOs:**
+
+1. **Conditional Rendering (What shows/hides) - FROM DESIGN CONTEXT**
+ - **Check design context:** Compare child node lists between variants
+ - Elements that appear/disappear (node exists in one variant but not another)
+ - Overlay/tooltip/dropdown that becomes visible (new absolute/fixed positioned node)
+ - Empty state messaging (text node appears only in empty variant)
+ - **CRITICAL: If a node appears in Hover but not Rest, implement with React state, NOT a state prop**
+ - **Add TODO for each conditional element:**
+ ```typescript
+ // TODO (from design context): Test [element] is hidden when [variant] (node missing in design context)
+ // TODO (from design context): Test [element] appears when [variant] (node present in design context)
+ // TODO (from design context): Verify against variable: [varName] (check with mcp_figma_get_variable_defs)
+ ```
+
+2. **Color Changes (ALL elements, not just container)**
+ - Text color changes
+ - Icon color changes (check EVERY icon)
+ - Background color changes
+ - Border color changes
+ - **Add TODO for each color change:**
+ ```typescript
+ // TODO (from screenshots): Test [element] changes from [color1] to [color2]
+ // TODO (from screenshots): Verify design variable [varName] used
+ ```
+
+3. **Spacing Changes (MOST COMMONLY MISSED) - FROM DESIGN CONTEXT**
+ - **Check design context:** Compare style properties for padding, gap, margin, dimensions
+ - Padding (p-, px-, py-, pt-, pb-, pl-, pr-)
+ - Margin (m-, mx-, my-, mt-, mb-, ml-, mr-)
+ - Gap (gap-, gap-x-, gap-y-)
+ - Dimensions (w-, h-, size-)
+ - **CRITICAL: Rest/Default states need spacing too - don't assume spacing only changes between states**
+ - **CRITICAL: Verify spacing in EVERY variant, including the baseline Rest/Default state**
+ - **CRITICAL: When variant names suggest "Rest", "Empty", or "Default", do NOT assume gap-0 or zero spacing is correct**
+ - **CRITICAL: Values explicitly set to 0 in design context ARE INTENTIONAL - test them**
+ - **CRITICAL: If design context shows padding: 0, this is DIFFERENT from missing padding property**
+ - **CRITICAL: Compare design context values against mcp_figma_get_variable_defs to verify known tokens**
+ - **Add TODO for each spacing value:**
+ ```typescript
+ // TODO (from design context): Test [element] has padding [value]px (from design context style.padding)
+ // TODO (from design context): Test gap is [value]px (from design context style.gap) - verify NOT 0 unless explicitly set
+ // TODO (from design context): Verify design variable [varName] matches value [actualValue] (cross-check with mcp_figma_get_variable_defs)
+ // TODO (from design context): Test spacing changes from [value1]px to [value2]px between variants
+ ```
+
+4. **Overlay/Modal Detection**
+ - Look for Figma node names: "tooltip", "popover", "modal", "menu", "card", "panel", "dropdown"
+ - Look for content appearing above/below/beside trigger
+ - Look for shadows/elevation suggesting layering
+ - **CRITICAL: Hover variants often contain overlays that only show on interaction**
+ - **If State=Hover variant shows additional UI elements, these MUST be implemented as interactive overlays**
+ - **Add TODOs for overlay requirements:**
+ ```typescript
+ // TODO (from screenshots): Test overlay appears on actual mouse hover, not just state prop
+ // TODO (from screenshots): Test overlay has absolute positioning class
+ // TODO (from screenshots): Test overlay has z-index class (z-10, z-20, etc.)
+ // TODO (from screenshots): Test parent has relative class
+ // TODO (from screenshots): Test overlay positioned [top-full/bottom-full/left-full/right-full]
+ // TODO (from screenshots): Test overlay doesn't shift page content
+ // TODO (from screenshots): Implement with React state (useState) for actual hover behavior
+ ```
+
+5. **Typography Changes**
+ - Font size, weight, line height
+ - **Add TODO for typography changes:**
+ ```typescript
+ // TODO (from screenshots): Test font weight changes from [value1] to [value2]
+ ```
+
+6. **Layout Changes**
+ - Flex direction, alignment, justify-content, order
+ - **Add TODO for layout changes:**
+ ```typescript
+ // TODO (from screenshots): Test flex direction changes to [value]
+ ```
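+Many of the spacing checks above compare raw pixel values from the design context against Tailwind utility classes. A minimal sketch of that conversion, assuming Tailwind's default spacing scale (1 step = 4px); `pxToTailwindStep` is an illustrative helper, not part of the MCP tooling:
+
+```typescript
+// Illustrative helper: convert a pixel value from design context into the
+// Tailwind spacing step it should correspond to (default scale: 1 step = 4px).
+function pxToTailwindStep(px: number): string {
+  if (px % 4 !== 0) {
+    // Values off the default scale are custom - use arbitrary-value syntax and flag
+    return `[${px}px]`;
+  }
+  return String(px / 4); // e.g. 16px -> "4", so padding: 16px -> p-4
+}
+
+console.log(`gap-${pxToTailwindStep(8)}`);  // gap-2
+console.log(`p-${pxToTailwindStep(0)}`);    // p-0 (explicit zero is intentional)
+console.log(`p-${pxToTailwindStep(18)}`);   // p-[18px] (custom, flag for review)
+```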
+
+**Step 3: Add discovered TODOs to relevant describe blocks**
+
+Update the describe blocks from Phase 2.4 with all visual findings:
+
+```typescript
+// Figma Variant: State=Hover, ShowCount=True
+describe('Hover state with count visible', () => {
+ // TODO (from naming): Test cursor-pointer on interactive elements
+ // TODO (from naming): Test hover: pseudo-classes for color changes
+ // TODO (from naming): Test event handlers (onMouseEnter, onMouseLeave)
+
+ // TODOs added from screenshot analysis:
+ // TODO (from screenshots): Test icon changes from text-gray-400 to text-primary-600
+ // TODO (from screenshots): Test label changes from text-gray-900 to text-primary-600
+ // TODO (from screenshots): Test tooltip overlay appears on hover
+ // TODO (from screenshots): Test tooltip has absolute positioning (absolute top-full left-0 mt-2 z-10)
+ // TODO (from screenshots): Test parent has relative class
+ // TODO (from screenshots): Test tooltip content shows [details from screenshot]
+ // TODO (from screenshots): Verify Colors/Primary/600 design variable used
+});
+```
+
+**🛑 Checkpoint: Design Context and Screenshot Analysis Complete**
+- [ ] `mcp_figma_get_design_context` called for ALL variant nodes
+- [ ] `mcp_figma_get_screenshot` captured for ALL variants
+- [ ] `mcp_figma_get_variable_defs` called to get design tokens
+- [ ] Design context compared between variants to detect:
+ - [ ] Conditional rendering (missing/present nodes)
+ - [ ] Style property differences (colors, spacing, sizing)
+ - [ ] Values explicitly set to 0 (documented as intentional)
+ - [ ] Overlay/absolute positioned nodes
+- [ ] Every describe block updated with visual findings
+- [ ] TODOs marked as "from design context" or "from screenshots"
+- [ ] Conditional rendering documented (what shows/hides)
+- [ ] ALL color changes documented for ALL elements
+- [ ] ALL spacing values documented (including explicit 0 values)
+- [ ] All values cross-checked against design variables
+- [ ] Overlays identified and positioning requirements documented
+
+#### 3.3 Analyze Specification and Application Context (Third Pass)
+
+**Review the feature specification and consider how application state affects the component.**
+
+**Step 1: Review the specification document**
+
+Look at the spec that led you to implement this component:
+- What user stories are being solved?
+- What application states trigger this component to appear/change?
+- What data drives the component?
+
+**Step 2: Identify application state-driven tests**
+
+**Consider these patterns and add TODOs:**
+
+1. **Current User Context** (e.g., `IsCurrentUser=True`, `IsOwner=True`, `IsAuthor=True`)
+ - How does the component behave when viewing own content vs. others' content?
+ - **Add TODOs:**
+ ```typescript
+ // TODO (from spec): Test highlights when current user matches item owner
+ // TODO (from spec): Test different permissions/actions shown for current user
+ // TODO (from spec): Pass matching userId to trigger isCurrentUser state
+ ```
+
+2. **Data Presence** (e.g., `Empty=True/False`, `HasData=True/False`)
+ - How does the component behave with no data?
+ - How does it behave with minimum data?
+ - How does it behave with maximum data?
+ - **Add TODOs:**
+ ```typescript
+ // TODO (from spec): Test empty state with data=[]
+ // TODO (from spec): Test with minimum data (1 item)
+ // TODO (from spec): Test with maximum data (overflow behavior)
+ // TODO (from spec): Test truncation rules ("shows 10 items, then 'and X more'")
+ ```
+
+3. **Application State Changes**
+ - What happens when user performs an action that changes data?
+ - What optimistic updates occur?
+ - **Add TODOs:**
+ ```typescript
+ // TODO (from spec): Test component updates when [application state] changes
+ // TODO (from spec): Test optimistic update when [action] performed
+ ```
+
+4. **Business Logic Rules**
+ - Are there business rules that affect what's shown?
+ - Are there permission checks?
+ - **Add TODOs:**
+ ```typescript
+ // TODO (from spec): Test [element] only shows when [business rule] is met
+ // TODO (from spec): Test permission-based visibility
+ ```
+
+**Step 3: Add spec-driven TODOs to describe blocks**
+
+```typescript
+// Figma Variant: State=Selected
+describe('Selected state', () => {
+ // TODO (from naming): Test selection prop (isSelected={true})
+
+ // TODO (from screenshots): Test background changes to bg-primary-50
+ // TODO (from screenshots): Test icon changes to text-primary-600
+
+ // TODOs added from spec analysis:
+ // TODO (from spec): Test triggers when current user matches item author
+ // TODO (from spec): Pass userId={CURRENT_USER_ID} and currentUserId={CURRENT_USER_ID}
+ // TODO (from spec): Test matches spec requirement: "Highlight current user's items"
+});
+```
+
+**🛑 Checkpoint: Spec-based TODOs complete**
+- [ ] Specification document reviewed
+- [ ] Current user context tests identified
+- [ ] Data presence edge cases identified
+- [ ] Application state change tests identified
+- [ ] Business logic tests identified
+- [ ] All spec-driven TODOs added to describe blocks
+
+#### 3.4 Create Comprehensive Tracking Table
+
+**Now consolidate ALL TODOs from naming, screenshots, and spec into a tracking table.**
+
+**Template:**
+```
+| Source | Variant Properties | Element/Area | Difference Type | Before (Baseline) | After (Variant) | Design Variable | Test Description |
+```
+
+**Example tracking table with all sources:**
+```
+| Source | Variant Properties | Element | Difference Type | Before | After | Design Variable | Test Description |
+|--------|-------------------|---------|-----------------|--------|-------|-----------------|------------------|
+| Naming | State=Hover | Button | Interaction | - | cursor-pointer | - | "has cursor-pointer class" |
+| Naming | State=Hover | Button | Event Handler | - | onMouseEnter/Leave | - | "wires hover event handlers" |
+| Design Context | State=Hover | Icon | Color | text-gray-400 | text-primary-600 | Colors/Primary/600 | "changes icon color to primary-600 on hover" |
+| Design Context | State=Hover | Label | Color | text-gray-900 | text-primary-600 | Colors/Primary/600 | "changes label color to primary-600 on hover" |
+| Design Context | State=Hover | Tooltip | Conditional Render | hidden (no node) | visible (node present) | - | "displays tooltip on hover" |
+| Design Context | State=Hover | Tooltip | Positioning | - | absolute top-full left-0 mt-2 z-10 | - | "positions tooltip below without shifting content" |
+| Spec | State=Selected | Background | Color | transparent | bg-primary-50 | Colors/Primary/50 | "applies background when current user" |
+| Spec | State=Selected | Props | Application State | - | userId matches currentUserId | - | "triggers selection when user IDs match" |
+| Design Context | ShowCount=False | Count Badge | Conditional Render | visible (node present) | hidden (node missing) | - | "hides count badge" |
+| Design Context | ShowCount=False | Gap | Spacing | gap-2 (8px) | gap-1 (4px) | Spacing/xs | "reduces gap to 4px when count hidden" |
+| Design Context | State=Rest | Container | Spacing | - | gap-4 (16px) | Spacing/md | "maintains 16px gap even with no content" |
+| Naming | State=Disabled | Button | Cursor | cursor-pointer | cursor-not-allowed | - | "shows not-allowed cursor" |
+| Design Context | State=Disabled | Text | Color | text-primary-600 | text-gray-400 | Colors/Neutral/Gray 400 | "applies gray color when disabled" |
+```
+
+**Note:** "Design Context" as source means the value was verified using `mcp_figma_get_design_context` and cross-checked against `mcp_figma_get_variable_defs`.
+
+**Group by variant combination** - each variant gets a section in the tracking table with all its test requirements from all three sources.
+
+**🛑 CRITICAL: This tracking table is your complete test specification**
+- Each row will become a unit test in Phase 5
+- No row should be missing
+- Each row should reference its source (naming/screenshots/spec)
+
+**🛑 Checkpoint: Tracking table complete**
+- [ ] Tracking table includes TODOs from naming analysis (Phase 3.1)
+- [ ] Tracking table includes TODOs from screenshot analysis (Phase 3.2)
+- [ ] Tracking table includes TODOs from spec analysis (Phase 3.3)
+- [ ] Each row has: source, variant properties, element, difference type, before/after values, design variable, test description
+- [ ] Table is grouped by variant combination for clarity
+
+---
+
+### Phase 4: Value Verification
+
+**MCP Tools Used:**
+- `mcp_figma_get_variable_defs` - Get all design tokens/variables
+- `mcp_figma_get_design_context` - Verify actual values match expected variables
+
+**Skills Referenced:**
+- `mcp-error-handling` - Handle MCP tool errors during value verification
+
+#### 4.1 Get Design Variables
+Call `mcp_figma_get_variable_defs` on the Component Set node:
+- Returns all design tokens/variables used in this component
+- Track: colors, spacing, typography, dimensions
+- **This is your source of truth for what values SHOULD be used**
+
+Example variable response:
+```json
+{
+ "Spacing/xs": "4px",
+ "Spacing/sm": "8px",
+ "Spacing/md": "16px",
+ "Colors/Primary/600": "#0066cc",
+ "Colors/Neutral/Gray 400": "#949494"
+}
+```
+
+#### 4.2 Cross-Reference Design Context Values Against Variables
+For EACH variant from Phase 3.2:
+- Review the design context style properties you captured
+- Match each value to a variable from `mcp_figma_get_variable_defs`
+- **CRITICAL: If a value in design context doesn't match any variable, investigate:**
+ - Is it a custom/arbitrary value? (acceptable but note in tracking table)
+ - Is it calculated/derived? (e.g., gap = spacing/xs + spacing/sm)
+ - Is it a mistake in the design? (flag for designer review)
+
+**Verification process:**
+```typescript
+// Example: Verify gap value from design context matches a known variable
+const hoverContext = await mcp_figma_get_design_context({ nodeId: "1:3" });
+const variables = await mcp_figma_get_variable_defs({ nodeId: "1:2" });
+
+// hoverContext shows: style.gap = "16px"
+// variables shows: "Spacing/md" = "16px"
+// ✅ MATCH - document in tracking table as "Spacing/md (16px)"
+
+// If hoverContext shows: style.gap = "18px"
+// But NO variable equals 18px
+// ❌ MISMATCH - document as "18px (custom, no variable)" and flag
+```
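+The match/mismatch decision can be sketched as a simple reverse lookup over the variable map (shape taken from the 4.1 example; `matchVariable` is an illustrative helper, not an MCP tool):
+
+```typescript
+// Illustrative helper: cross-reference a design-context value against the
+// variables returned by mcp_figma_get_variable_defs.
+function matchVariable(value: string, variables: Record<string, string>): string {
+  const hit = Object.entries(variables).find(([, v]) => v === value);
+  return hit
+    ? `${hit[0]} (${value})`            // MATCH - record the variable name
+    : `${value} (custom, no variable)`; // MISMATCH - flag for review
+}
+
+const variables = { 'Spacing/xs': '4px', 'Spacing/md': '16px' };
+console.log(matchVariable('16px', variables)); // Spacing/md (16px)
+console.log(matchVariable('18px', variables)); // 18px (custom, no variable)
+```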
+
+#### 4.3 Verify Values Explicitly Set to Zero
+**CRITICAL: Zero values are intentional design decisions - test them!**
+
+- Review design context for properties set to 0: `padding: 0`, `gap: 0`, `margin: 0`
+- **A property set to 0 is DIFFERENT from a property that is missing/undefined**
+- **If design context shows `gap: 0`, this means designer wants NO gap - not that gap is unset**
+- **Add explicit tests for these zero values:**
+ ```typescript
+ // TODO (from design context): Test gap is 0px when State=Rest (explicitly set in design)
+ // TODO (from design context): Test padding is 0px on inner element (design intent, not missing)
+ ```
+
+#### 4.4 Update Tracking Table with Verified Values
+Go back to your Phase 3.4 tracking table and enhance it:
+- Add exact variable names in "Design Variable" column
+- Mark custom values that don't match any variable
+- Mark zero values as explicitly verified
+- Flag any mismatches for designer review
+
+**Enhanced tracking table:**
+```
+| Source | Variant Properties | Element | Difference Type | Before | After | Design Variable | Verified | Test Description |
+|--------|-------------------|---------|-----------------|--------|-------|-----------------|----------|------------------|
+| Design Context | State=Hover | Icon | Color | text-gray-400 | text-primary-600 | Colors/Primary/600 | ✅ | "changes icon to primary-600 on hover" |
+| Design Context | ShowCount=False | Gap | Spacing | gap-2 (8px) | gap-1 (4px) | Spacing/xs | ✅ | "reduces gap to 4px when count hidden" |
+| Design Context | State=Rest | Gap | Spacing | - | gap-4 (16px) | Spacing/md | ✅ EXPLICIT | "has 16px gap even with no content" |
+| Design Context | State=Empty | Padding | Spacing | padding-2 | padding-0 (0px) | ZERO VALUE | ✅ EXPLICIT | "has 0px padding when empty (design intent)" |
+| Design Context | Icon | Size | Dimension | - | 18px | CUSTOM | ⚠️ NO VAR | "icon is 18px (custom, no variable)" |
+```
+
+#### 4.5 Document Conditional Values
+For values that change between variants:
+- Note which variant combinations trigger each value in your tracking table
+- Each conditional value becomes a test assertion
+- Example: "Gap changes from 4px to 8px when Size=Large"
+- Example: "Text color changes to Gray 400 when State=Disabled"
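+Each conditional-value row translates almost mechanically into a branch the tests can assert on. An illustrative sketch using the two examples above (prop and class names are placeholders, not the actual component API):
+
+```typescript
+// Illustrative: tracking-table rows as variant-to-class branches.
+function gapClass(size: 'Default' | 'Large'): string {
+  return size === 'Large' ? 'gap-2' : 'gap-1'; // 8px when Large, otherwise 4px
+}
+
+function textColorClass(state: 'Rest' | 'Disabled'): string {
+  return state === 'Disabled' ? 'text-gray-400' : 'text-primary-600';
+}
+
+console.log(gapClass('Large'));          // gap-2
+console.log(textColorClass('Disabled')); // text-gray-400
+```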
+
+### Phase 5: Test Generation
+
+**MCP Tools Used:**
+- None - this phase uses the data gathered from previous MCP tool calls
+
+**Skills Referenced:**
+- `cross-package-types` - Reference when tests involve shared types from @carton/shared
+
+**Note:** Test structure with TODO placeholders was created in Phase 2.4. Now we fill in the actual test implementations using the verified values from Phase 4.
+
+#### 5.1 Map Variants to Application/Interaction States
+Determine how each Figma variant maps to real-world usage:
+
+**Application States** (data-driven, priority 1):
+- Empty state: No data, zero values
+- Minimum data: Smallest valid content
+- Maximum data: Largest valid content, overflow behavior
+- Current user: Logged-in user is subject of component
+- Selected: Component is currently selected/active in UI
+
+**User Interaction States** (event-driven, priority 2):
+- Focus: Component has keyboard focus
+- Hover: Mouse hovering over component
+- Active: Component is being clicked/pressed
+- Visited: Component has been interacted with before
+
+**Mapping Rules**:
+1. One Figma variant → ONE state (no reuse)
+2. Prioritize application states over interaction states
+3. If unclear, keep the TODO and note ambiguity
+
+Example mapping:
+```
+Figma Variant: State=Empty → Application: Empty state (no data)
+Figma Variant: State=Default → Application: Minimum data
+Figma Variant: State=Selected → Application: Selected
+Figma Variant: State=Hover → Interaction: Hover
+```
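+The mapping rules above can be expressed as a small priority lookup (the state lists are illustrative, drawn from the examples in this section, not an exhaustive taxonomy):
+
+```typescript
+// Illustrative: rule 2 (application states win) and rule 3 (keep TODO if unclear).
+const APPLICATION_STATES = ['Empty', 'Default', 'Selected'];
+const INTERACTION_STATES = ['Hover', 'Focus', 'Active', 'Visited'];
+
+function mapVariant(state: string): string {
+  if (APPLICATION_STATES.includes(state)) return `Application: ${state}`;
+  if (INTERACTION_STATES.includes(state)) return `Interaction: ${state}`;
+  return `TODO: ambiguous variant "${state}" - keep placeholder and note ambiguity`;
+}
+
+console.log(mapVariant('Empty')); // Application: Empty
+console.log(mapVariant('Hover')); // Interaction: Hover
+```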
+
+#### 5.2 Generate Tests from Tracking Table
+
+**Each row in your Phase 3 tracking table becomes a unit test inside the appropriate describe block created in Phase 2.4.**
+
+**Process:**
+1. Take each row from your tracking table
+2. Identify which describe block it belongs to (based on variant properties)
+3. Replace the TODO comment with actual test implementation
+4. Determine the test type based on "Difference Type" column
+5. Use the variant properties to determine how to trigger the state (prop values or user interaction)
+6. Write test using the "Test Description" as the test name
+7. **Add mandatory `// Figma Variant:` comment above EACH individual test** with ALL variant properties and values
+
+**Critical Comment Requirements:**
+- **Describe blocks** get comments indicating the variant group: `// Figma Variant: State=Hover`
+- **Individual tests** ALSO get comments with COMPLETE variant info: `// Figma Variant: State=Hover, ShowCount=True`
+- This creates clear traceability from Figma β describe block β specific test
+
+**Conversion Examples:**
+
+**Tracking Table Row:**
+```
+| State=Hover | Icon | Color | text-gray-400 | text-primary-600 | Colors/Primary/600 | "changes icon to primary-600 on hover" |
+```
+
+**Generated Test (placed inside the `// Figma Variant: State=Hover` describe block):**
+```typescript
+// Figma Variant: State=Hover (describe block level)
+describe('Hover state', () => {
+ // Figma Variant: State=Hover, ShowCount=True (individual test level)
+ it('changes icon color to primary-600 on hover', async () => {
+ render(<ComponentName />);
+ const icon = screen.getByTestId('component-icon');
+
+ // Before hover - baseline color
+ expect(icon).toHaveClass('text-gray-400');
+
+ // After hover - variant color
+ await userEvent.hover(icon);
+ expect(icon).toHaveClass('hover:text-primary-600');
+ });
+});
+```
+
+---
+
+**Test Patterns by Difference Type:**
+
+**Pattern Structure: Dual-level variant comments**
+```typescript
+// Describe block gets high-level variant grouping
+// Figma Variant: State=Empty
+describe('Empty state', () => {
+
+ // Each individual test gets FULL variant details
+ // Figma Variant: State=Empty, ShowCount=False
+ it('hides the icon when count is zero', () => {
+ render(<ComponentName />);
+ expect(screen.queryByTestId('component-icon')).not.toBeInTheDocument();
+ });
+
+ // Figma Variant: State=Empty, ShowCount=True
+ it('shows placeholder when no data but showCount is true', () => {
+ render(<ComponentName />);
+ expect(screen.getByText('0')).toBeInTheDocument();
+ });
+});
+```
+
+**1. Conditional Rendering Tests**
+```typescript
+// Figma Variant: State=Empty
+describe('Empty state', () => {
+ // Figma Variant: State=Empty, ShowCount=False
+ it('hides the icon when State=Empty', () => {
+ render(<ComponentName />);
+ expect(screen.queryByTestId('component-icon')).not.toBeInTheDocument();
+ });
+});
+
+// Figma Variant: State=Default
+describe('Default state', () => {
+ // Figma Variant: State=Default, ShowCount=True
+ it('shows the icon when State=Default', () => {
+ render(<ComponentName />);
+ expect(screen.getByTestId('component-icon')).toBeInTheDocument();
+ });
+});
+```
+
+**2. Visual Property Tests**
+```typescript
+// Figma Variant: State=Disabled
+describe('Disabled state', () => {
+ // Figma Variant: State=Disabled, ShowCount=True
+ it('applies gray text color when State=Disabled', () => {
+ render(<ComponentName />);
+ const element = screen.getByTestId('component-text');
+ expect(element).toHaveClass('text-neutral-400');
+ });
+});
+
+// Figma Variant: Size=Small
+describe('Small size variant', () => {
+ // Figma Variant: Size=Small, State=Default
+ it('applies 4px padding when Size=Small', () => {
+ render(<ComponentName />);
+ const element = screen.getByTestId('component-container');
+ expect(element).toHaveClass('p-1'); // p-1 = 4px in Tailwind
+ });
+});
+```
+
+**3. Hover Interaction Tests**
+```typescript
+// Figma Variant: State=Hover (compare to State=Rest for differences)
+describe('Hover state', () => {
+ // Figma Variant: State=Hover, ShowCount=True
+ it('changes icon color to primary-600 on hover', async () => {
+ render(<ComponentName />);
+ const icon = screen.getByTestId('status-icon');
+
+ // Rest state - icon is gray-400
+ expect(icon).toHaveClass('text-gray-400');
+
+ // Hover state - icon becomes primary-600
+ await userEvent.hover(icon);
+ expect(icon).toHaveClass('hover:text-primary-600');
+ });
+
+ // Figma Variant: State=Hover, ShowCount=True (tooltip appears)
+ it('displays tooltip with details on hover', async () => {
+ render(<ComponentName />);
+ const button = screen.getByTestId('status-button');
+
+ // Tooltip not visible initially
+ expect(screen.queryByTestId('status-tooltip')).not.toBeInTheDocument();
+
+ // Hover shows tooltip
+ await userEvent.hover(button);
+ expect(screen.getByTestId('status-tooltip')).toBeInTheDocument();
+ expect(screen.getByText('Item A, Item B, Item C')).toBeInTheDocument();
+ });
+});
+```
+
+**4. Selection/Active State Tests**
+```typescript
+// Figma Variant: State=Selected (compare to State=Rest for differences)
+describe('Selected state', () => {
+ // Figma Variant: State=Selected, ShowCount=True
+ it('applies primary background when selected', () => {
+ render(<ComponentName />);
+ const button = screen.getByTestId('status-button');
+
+ // Selected state - background is primary-50, icon is primary-600
+ expect(button).toHaveClass('bg-primary-50');
+ expect(screen.getByTestId('status-icon')).toHaveClass('text-primary-600');
+ });
+});
+
+// Figma Variant: State=Rest (no selection)
+describe('Rest state', () => {
+ // Figma Variant: State=Rest, ShowCount=True
+ it('applies neutral background when not selected', () => {
+ render(<ComponentName />);
+ const button = screen.getByTestId('status-button');
+
+ // Rest state - no background, icon is gray-400
+ expect(button).not.toHaveClass('bg-primary-50');
+ expect(screen.getByTestId('status-icon')).toHaveClass('text-gray-400');
+ });
+});
+```
+
+**Application State Tests:**
+```typescript
+// Figma Variant: State=Empty → Application State: Empty (no data)
+it('displays empty state message when data is empty', () => {
+ render(<ComponentName />);
+ expect(screen.getByText('No items available')).toBeInTheDocument();
+});
+
+// Figma Variant: State=Selected → Application State: Current user is selected
+it('applies selected styling when item is current user', () => {
+ render(<ComponentName />);
+ expect(screen.getByTestId('component-root')).toHaveClass('bg-primary-100');
+});
+```
+
+**Interaction State Tests:**
+```typescript
+// Figma Variant: State=Hover → Interaction State: Hover
+it('applies hover styles on mouse enter', async () => {
+ render(<ComponentName />);
+ const element = screen.getByTestId('component-button');
+
+ await userEvent.hover(element);
+ expect(element).toHaveClass('bg-primary-600');
+});
+
+// Figma Variant: State=Focus → Interaction State: Focus
+it('applies focus styles when focused', () => {
+ render(<ComponentName />);
+ const element = screen.getByTestId('component-input');
+
+ element.focus();
+ expect(element).toHaveClass('ring-2 ring-primary-500');
+});
+```
+
+**5. Element Ordering Tests**
+```typescript
+it('renders elements in correct order matching Figma hierarchy', () => {
+ render();
+ const container = screen.getByTestId('component-root');
+ const children = Array.from(container.children);
+
+ expect(children[0]).toHaveAttribute('data-testid', 'component-icon');
+ expect(children[1]).toHaveAttribute('data-testid', 'component-label');
+ expect(children[2]).toHaveAttribute('data-testid', 'component-count');
+});
+```
+
+#### 5.3 Handle Ambiguous Cases
+When variant purpose is unclear after analysis, leave the TODO placeholder with detailed notes:
+```typescript
+// Figma Variant: State=Special, Size=Large
+describe('Special state with large size', () => {
+ // TODO: Add unit tests for State=Special, Size=Large
+ // UNCLEAR: How this variant maps to application or interaction state
+ // NEXT STEPS: Review Figma documentation or consult with designer
+ // TRACKING: No clear visual differences identified in Phase 3 analysis
+});
+```
+
+#### 5.4 VALIDATION CHECKPOINT (Before Writing Component)
+**🛑 STOP - Complete this checklist AND run validation script before proceeding:**
+
+**Phase 2.4 Test Structure Completeness:**
+- [ ] Describe blocks created for EVERY variant property/value combination from Phase 2.3
+- [ ] EVERY describe block has a `// Figma Variant: Property=Value` comment
+- [ ] TODO placeholders replaced with actual test implementations
+- [ ] Comment format is consistent across all describe blocks
+- [ ] No variant combinations are missing from the test file structure
+
+**Phase 3 Tracking Table Completeness:**
+- [ ] Tracking table created with ALL variant combinations documented
+- [ ] EVERY visual difference between variants is a row in the table
+- [ ] Each row includes: variant properties, element, difference type, before/after values, design variable
+- [ ] Conditional rendering tracked (what appears/disappears)
+- [ ] Color changes tracked for ALL elements (not just container)
+- [ ] Spacing changes tracked (padding, margin, gap, dimensions)
+- [ ] Interactive states analyzed (hover, selection, disabled)
+- [ ] Overlay/tooltip content documented if present
+- [ ] 🛑 Overlay positioning analyzed (absolute vs relative, top-full/bottom-full/left-full/right-full)
+- [ ] 🛑 Icon vs. count conditional rendering analyzed separately (icons always show?)
+- [ ] 🛑 Hover state includes BOTH color changes AND overlay appearance if applicable
+
+**Overlay/Modal-Like Elements (if detected):**
+- [ ] 🛑 Overlay elements identified using Figma naming patterns ("tooltip", "popover", "modal", "menu", "card", "panel", "detail", "dropdown")
+- [ ] 🛑 Overlay positioning documented in tracking table (e.g., `absolute top-full left-0 mt-2 z-10`)
+- [ ] 🛑 Parent container positioning documented (needs `relative` class for positioning context)
+- [ ] 🛑 Z-index layering documented for proper stacking order (`z-10`, `z-20`, `z-50`, etc.)
+- [ ] 🛑 Spacing between trigger and overlay documented (e.g., `mt-2` for 8px gap, `mb-1` for 4px gap)
+
+**Test Coverage from Tracking Table:**
+- [ ] EVERY row in tracking table has a corresponding unit test
+- [ ] EVERY test has a comment with ALL variant properties and values
+- [ ] Example format used: `// Figma Variant: State=Rest, Count=True`
+- [ ] Describe blocks also include variant info in name or comment
+
+**Interactive State Testing:**
+- [ ] Interaction states tested with appropriate user events (e.g., `userEvent.hover()` for State=Hover, `.focus()` for State=Focus)
+- [ ] Selection/active states tested with appropriate props (if State=Selected/Active/Current variant exists)
+- [ ] Tests verify BEFORE and AFTER values from tracking table
+- [ ] Visual feedback changes tested (icon colors, text colors, backgrounds, borders) in interaction states
+- [ ] Selection state highlighting tested (background, border, text color changes)
+
+**Overlay Testing (if applicable):**
+- [ ] If interaction variant shows overlay β test includes overlay rendering
+- [ ] Overlay content is tested (lists, detail text, truncation rules like "and X more")
+- [ ] Overlay positioning is tested (verify element has `data-testid`)
+- [ ] 🛑 Overlay has `absolute` or `fixed` position class tested
+- [ ] 🛑 Overlay has appropriate `z-index` class tested
+- [ ] 🛑 Parent container has `relative` class tested (provides positioning context)
+- [ ] 🛑 Test verifies overlay doesn't shift page content (absolute positioning removes from flow)
+
+**Conditional Display Testing (if empty/zero states exist):**
+- [ ] 🛑 Analyzed which specific elements hide vs. remain visible in empty states
+- [ ] 🛑 Separate tests for structural elements that always show (affordances: buttons, icons, containers)
+- [ ] 🛑 Separate tests for data displays that conditionally show (counts, labels, content when data exists)
+- [ ] 🛑 Don't hide structural/affordance elements - only data displays should be conditional
+
+**Value Precision:**
+- [ ] ALL colors matched to `mcp_figma_get_variable_defs` results (no guessing)
+- [ ] ALL spacing matched to design variables (no px approximations)
+- [ ] Conditional values documented in tracking table ("changes from X to Y when State=Z")
+- [ ] Test assertions use exact values from tracking table
+
+**🛑 MANDATORY: Run Validation Script**
+
+Before proceeding to Phase 6 (component implementation), you MUST run the validation script:
+
+```bash
+.github/skills/component-set-test-driven-development/scripts/validate-tests.sh \
+ path/to/ComponentName.test.tsx
+```
+
+**Expected output:**
+```
+✅ Variant comment coverage: 100%
+✅ Hover state coverage: Present with userEvent.hover()
+✅ Selection state coverage: Present with appropriate props
+✅ Tooltip/overlay coverage: Present
+✅ Value precision: All values from design variables
+
+All checks passed! ✅
+```
+
+**If the script fails:**
+- Read the error messages carefully
+- Fix the identified issues in the test file
+- Re-run the script until it passes
+- DO NOT proceed to component implementation until validation passes
+
+**If ANY manual checklist item is unchecked OR the validation script fails:**
+- Return to Phase 3 or Phase 5 and fix before proceeding
+- **DO NOT create the component file**
+- **DO NOT implement any component code**
+
+---
+
+### Phase 6: Component Implementation
+
+**MCP Tools Used:**
+- `mcp_figma_get_screenshot` - Reference visual design during implementation
+- `mcp_figma_get_design_context` - Reference exact styles and structure during coding
+
+**Skills Referenced:**
+- `implementation-validation` - MANDATORY validation before marking complete
+- `tailwind-utility-simplification` - Simplify Tailwind classes using standard spacing scale
+- `mcp-error-handling` - Handle MCP tool errors when fetching additional context
+- `cross-package-types` - Import and use types from @carton/shared if needed
+
+**⚠️ CRITICAL: Only reach this phase AFTER Phase 5.4 validation is 100% complete**
+
+#### 6.1 Create Component File
+**NOW** create the component implementation:
+- Place component file adjacent to test file
+- Typically: `[ComponentName].tsx` in same folder as `[ComponentName].test.tsx`
+- Start with minimal implementation to make first test pass
+
+#### 6.2 Critical Implementation Requirements
+
+**⚠️ MOST COMMONLY MISSED - Check EVERY one:**
+
+**Spacing:**
+- [ ] Review Phase 3 tracking table for ALL spacing values
+- [ ] Apply padding (p-, px-, py-) from Figma Auto Layout
+- [ ] Apply gap (gap-) to flex/grid containers
+- [ ] Apply dimensions (w-, h-, size-) to sized elements
+
+**Interactive States:**
+- [ ] Add `cursor-pointer` to clickable elements
+- [ ] Add `hover:` classes for ALL child elements (icon + text)
+- [ ] Wire up `onClick`/`onPress` handlers
+- [ ] Add `transition-colors` for smooth hover effects
+
+**Hover Pattern:**
+```tsx
+// ❌ BAD - hover class on the parent only; the icon never changes
+<div className="hover:text-teal-600">
+ <Icon className="text-gray-700" /> Text
+</div>
+
+// ✅ GOOD - all children change together
+const color = showHover ? 'text-teal-600' : 'text-gray-700';
+<div onMouseEnter={() => setShowHover(true)} onMouseLeave={() => setShowHover(false)}>
+ <Icon className={color} />
+ <span className={color}>Text</span>
+</div>
+```
+
+**Overlays:**
+- [ ] Use `absolute` positioning (NOT relative)
+- [ ] Parent has `relative` class
+- [ ] Has `z-index` class (z-10, z-20)
+- [ ] Directional positioning (top-full, bottom-full)
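+The overlay checklist can be sketched as a class recipe (illustrative class strings mirroring the checklist items, not a real utility in this codebase):
+
+```typescript
+// Illustrative: overlay + parent class recipe from the checklist above.
+function overlayClasses(side: 'top' | 'bottom'): { parent: string; overlay: string } {
+  const direction = side === 'top' ? 'bottom-full mb-2' : 'top-full mt-2';
+  return {
+    parent: 'relative',                           // positioning context for the overlay
+    overlay: `absolute ${direction} left-0 z-10`, // removed from flow, layered above
+  };
+}
+
+console.log(overlayClasses('bottom').overlay); // absolute top-full mt-2 left-0 z-10
+```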
+
+#### 6.3 Post-Implementation Validation
+
+**🛑 MANDATORY before marking complete:**
+1. Run implementation-validation skill (`.github/skills/implementation-validation/SKILL.md`)
+2. Manual browser test: hover, click, spacing, colors
+3. Run tests: `npm test -- ComponentName.test.tsx` (all pass ✅)
+4. Visual comparison: component vs. Figma screenshot
+5. Re-run validation script: `.github/skills/component-set-test-driven-development/scripts/validate-tests.sh`
+
+**Common mistakes:**
+- ❌ Missing padding
+- ❌ Missing cursor-pointer
+- ❌ Hover only on parent, not children
+- ❌ Missing onClick wiring
+
+---
+
+### Phase 7: Test File Organization
+
+**MCP Tools Used:**
+- None - this phase focuses on test organization and documentation
+
+**Skills Referenced:**
+- None - pure test organization
+
+#### 7.1 File Structure
+Create/update test file: `ComponentName.test.tsx`
+
+```typescript
+import { render, screen } from '@testing-library/react';
+import userEvent from '@testing-library/user-event';
+import { ComponentName } from './ComponentName';
+
+describe('ComponentName', () => {
+ // Figma Variant: State=Rest
+ describe('Rest State', () => {
+ // Figma Variant: State=Rest, Count=False
+ it('renders with neutral styling when no data', () => { /* ... */ });
+
+ // Figma Variant: State=Rest, Count=True
+ it('shows count with neutral styling', () => { /* ... */ });
+ });
+
+ // Figma Variant: State=Hover
+ describe('Hover State', () => {
+ // Figma Variant: State=Hover, Count=True
+ it('highlights icon on hover', async () => {
+ const user = userEvent.setup();
+      render(<ComponentName />);
+ await user.hover(screen.getByTestId('component-icon'));
+ expect(screen.getByTestId('component-icon')).toHaveClass('text-primary-600');
+ });
+
+ // Figma Variant: State=Hover (tooltip visible)
+ it('displays tooltip with details on hover', async () => {
+ const user = userEvent.setup();
+      render(<ComponentName />);
+ await user.hover(screen.getByTestId('component-button'));
+ expect(screen.getByText('Item A, Item B')).toBeInTheDocument();
+ });
+ });
+
+ // Figma Variant: State=Selected
+ describe('Selected State', () => {
+ // Figma Variant: State=Selected, Count=True
+ it('applies primary background when selected', () => {
+      render(<ComponentName isSelected />);
+ expect(screen.getByTestId('component-button')).toHaveClass('bg-primary-50');
+ });
+ });
+
+ describe('Element Ordering', () => {
+ // Figma Variant: All variants (structural test)
+ it('renders icon before count badge', () => { /* ... */ });
+ });
+});
+```
+
+#### 7.2 Storybook Stories
+Create visual regression stories for each variant:
+
+```typescript
+import type { Meta, StoryObj } from '@storybook/react';
+import { ComponentName } from './ComponentName';
+
+const meta: Meta = {
+ title: 'Components/ComponentName',
+ component: ComponentName,
+};
+export default meta;
+
+type Story = StoryObj<typeof ComponentName>;
+
+export const HoverMedium: Story = {
+ args: { state: 'hover', size: 'medium' },
+ parameters: { pseudo: { hover: true } },
+};
+
+export const DisabledLarge: Story = {
+ args: { state: 'disabled', size: 'large' },
+};
+```
+
+## Workflow Summary
+
+```mermaid
+flowchart TD
+ A[Receive Figma URL in Spec] --> B[Extract fileKey and nodeId]
+ B --> C[Call get_design_context]
+ C --> D{Is COMPONENT_SET?}
+ D -->|No| Z[Use regular implementation flow]
+ D -->|Yes| E{Component exists?}
+ E -->|Yes| F[Audit existing tests]
+    E -->|No| G[✅ START TDD PROCESS]
+ G --> H[Create TEST file first]
+ H --> I[Parse componentPropertyDefinitions]
+ I --> J[Generate all variant combinations]
+ J --> K2[π Phase 2.4: Create describe blocks with TODO placeholders]
+ K2 --> L1[Phase 3.1: Analyze variant NAMES β add TODOs]
+ L1 --> L2[Phase 3.2: Capture SCREENSHOTS β add TODOs]
+ L2 --> L3[Phase 3.3: Analyze SPEC/CONTEXT β add TODOs]
+ L3 --> L4[Phase 3.4: Create tracking table from ALL TODOs]
+ L4 --> M[Phase 4: Call get_variable_defs]
+ M --> N[Match ALL values in table to design variables]
+ N --> O[Phase 5: Generate tests - fill in TODOs]
+ O --> P{Tests written?}
+ P -->|No| O
+ P -->|Yes| P2{Validate-tests.sh passes?}
+ P2 -->|No| O
+ P2 -->|Yes| Q[Run tests - confirm they fail]
+ Q --> R[Phase 6: NOW create component file]
+ R --> S[Implement minimal code to pass first test]
+ S --> T[Red-Green-Refactor for each test]
+    T --> U{All tests passing?}
+ U -->|No| T
+ U -->|Yes| W[Component complete]
+```
+
+## Critical Rules
+
+1. **🛑 TESTS FIRST** - Component file does NOT exist until Phase 6. Tests are written in Phases 2-5.
+2. **🛑 EVERY TEST NEEDS VARIANT COMMENTS** - Format: `// Figma Variant: Property=Value` above each test.
+3. **🛑 CREATE DESCRIBE BLOCKS EARLY** - Phase 2.4 MUST create describe blocks with variant comments and TODO placeholders for EVERY variant combination immediately after discovering variants, BEFORE visual analysis.
+4. **🛑 CREATE TRACKING TABLE** - Phase 3 MUST produce a structured table documenting EVERY difference between variants. Each row becomes a test.
+5. **🛑 RUN VALIDATION SCRIPT** - After test generation (Phase 5), BEFORE implementation (Phase 6), run `.github/skills/component-set-test-driven-development/scripts/validate-tests.sh`. The script MUST pass with 100% success.
+6. **Check if Component Set** - Call `mcp_figma_get_design_context` to verify `document.type === "COMPONENT_SET"` before proceeding with this workflow.
+7. **Track ALL visual differences** - Compare each variant to baseline, document colors, spacing, conditional rendering, layout changes. Don't miss icon color changes, tooltips, or selection states.
+8. **Verify ALL values against variables** - No assumptions or estimations. Use `mcp_figma_get_variable_defs` for every color, spacing, and dimension value.
+9. **⚠️ VALIDATE BEFORE IMPLEMENTING** - Complete the validation checkpoint (Phase 5.4) after test generation, before writing any component code.
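
Rule 8 can be made mechanical: once raw pixel values come back from `mcp_figma_get_variable_defs`, resolve them against the Tailwind spacing scale instead of guessing. A minimal sketch (the scale subset and function name are illustrative, not part of the skill's tooling):

```typescript
// Sketch: resolve a raw pixel value from Figma variables to a Tailwind
// gap class. Tailwind's default scale maps 1 unit = 4px.
const TAILWIND_SPACING_PX: Record<number, string> = {
  4: '1', 8: '2', 12: '3', 16: '4', 24: '6',
};

function gapClassForPx(px: number): string {
  const suffix = TAILWIND_SPACING_PX[px];
  if (suffix === undefined) {
    // No standard token: surface it instead of silently emitting gap-[Npx]
    throw new Error(`No standard Tailwind spacing token for ${px}px`);
  }
  return `gap-${suffix}`;
}

console.log(gapClassForPx(8)); // gap-2
```
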
+
+## Failure Recovery
+
+**If you proceed to implementation without following this skill:**
+
+The implementation WILL have critical defects:
+- ❌ Click/interaction handlers won't work correctly
+- ❌ Spacing will be completely wrong
+- ❌ Hover behavior will be broken
+- ❌ Overlays will shift page content instead of floating above
+- ❌ Component won't match the Figma design
+
+**Recovery process:**
+1. STOP all implementation work immediately
+2. Delete the component implementation file (keep tests if they exist)
+3. Start over from Phase 0 of this skill
+4. Create tracking table from Figma variants
+5. Write tests from tracking table
+6. Run validation script until it passes
+7. Only then implement component
+
+**Prevention:**
+- Always use figma-url-router skill when you see ANY Figma URL
+- Let the router detect the node type and route you to this skill
+- Trust the process - TDD takes longer upfront but saves debugging time
+
+## Common Mistakes to Avoid
+
+❌ **CRITICAL: Don't write component code first** - Tests must come before implementation
+❌ **CRITICAL: Don't skip TODO placeholder step** - Phase 5.1 MUST create describe blocks with variant comments and TODO placeholders BEFORE writing test implementations
+❌ **CRITICAL: Don't skip variant comments** - EVERY test AND describe block must have a `// Figma Variant: Property=Value` comment
+❌ **CRITICAL: Don't skip the tracking table** - Phase 3 MUST create a structured table of ALL differences
+❌ **CRITICAL: Don't miss overlay positioning** - Overlays must use `absolute` positioning to prevent content shift on the page
+❌ **CRITICAL: Don't conditionally hide structural elements** - Visual affordances (icons, buttons) show even when empty; data displays (counts, labels) are conditional
+❌ **CRITICAL: Don't test only overlay appearance** - Also test color/style changes on trigger elements in interaction states
+❌ **Don't skip the Component Set check** - Always verify with `mcp_figma_get_design_context` first
+❌ **Don't assume variant purposes without checking screenshots**
+❌ **Don't create tests for states not shown in Figma**
+❌ **Don't skip conditional value verification** - Use `mcp_figma_get_variable_defs` for ALL colors and spacing
+❌ **Don't forget to test element ordering and placement**
+❌ **Don't batch variant testing** - Each difference needs a dedicated test
+❌ **Don't ignore interaction state color changes** - Compare the interaction variant to the baseline for ALL element colors
+❌ **Don't forget selection highlighting** - Test background, border, and icon color changes for the selected state
+❌ **Don't skip overlay content testing** - If an interaction variant shows additional UI, test the content
+❌ **Don't guess at spacing values** - Always use tracked design variables
+❌ **Don't skip the validation checkpoint** - Complete the checklist before writing the component
+❌ **Don't compare variants in isolation** - Always compare against a baseline to identify what changed
+❌ **Don't track only the container** - Check ALL child elements for color/spacing changes
+❌ **Don't use relative/static positioning for overlays** - Overlays need `absolute` or `fixed` to prevent document flow disruption
+❌ **Don't assume entire sections are conditional** - Compare screenshots element-by-element to identify exactly what hides
+❌ **Don't forget z-index for overlays** - Floating UI needs a `z-index` class (e.g. `z-10`) for layering above other content
+
+✅ **Do** check if node is a Component Set before starting
+✅ **Do** write failing tests FIRST, then create the component
+✅ **Do** create describe blocks with TODO placeholders in Phase 5.1 BEFORE writing test implementations
+✅ **Do** add variant comments to EVERY test and EVERY describe block
+✅ **Do** create a comprehensive tracking table in Phase 3
+✅ **Do** compare each variant to a baseline variant
+✅ **Do** verify every value against `mcp_figma_get_variable_defs`
+✅ **Do** run tests to confirm they fail (red phase)
+✅ **Do** track what becomes hidden/visible (conditional rendering)
+✅ **Do** track color changes for ALL elements (not just the container)
+✅ **Do** compare the interaction variant to the rest variant for all differences
+✅ **Do** test visual feedback changes (colors, backgrounds, borders) in interaction states
+✅ **Do** test overlay/tooltip content and positioning for interaction states
+✅ **Do** test selection highlighting (background, border, text)
+✅ **Do** generate one test per tracking table row
+✅ **Do** complete the validation checkpoint before implementation
+✅ **Do** create Storybook stories for visual regression
+✅ **Do** test both rendering and behavior for each variant
+✅ **Do** check screenshots to understand visual differences
+✅ **Do** document unclear mappings with TODO tests
+✅ **Do** follow the Red-Green-Refactor cycle
+✅ **Do** identify overlays using Figma naming patterns - Look for "tooltip", "popover", "modal", "menu", "card", "panel" in node names
+✅ **Do** use `absolute` positioning for overlays - Parent gets `relative`, overlay gets `absolute` with directional classes
+✅ **Do** test overlay positioning classes - Verify `top-full`/`bottom-full`, alignment (`left-0`/`right-0`), and `z-index`
+✅ **Do** distinguish affordances from data displays - Structural elements (actions, controls) are always visible; data displays are conditional
+✅ **Do** test BOTH visual feedback AND overlays in interaction states - An interaction can trigger multiple simultaneous changes
+✅ **Do** use Figma naming as positioning hints - Words like "below", "above", "floating", "overlay" indicate absolute positioning requirements
+
+## Example: Complete TDD Workflow
+
+Given Figma URL: `https://figma.com/design/abc123/DesignFile?node-id=42-1`
+
+### Step 0: Verify Component Set (FIRST!)
+- fileKey: `abc123`
+- nodeId: `42:1`
+- Call `mcp_figma_get_design_context(nodeId: "42:1", fileKey: "abc123")`
+- Response shows `document.type: "COMPONENT_SET"` ✅
+- This IS a Component Set → Proceed with this skill
+
+### Step 1: Setup (NO component file yet)
+- Search for `Badge` in codebase → Not found ✅
+- Search for existing test files: `file_search(**/*.test.tsx)` → Find pattern: `src/components/[Name]/[Name].test.tsx`
+- **DO NOT create Badge.tsx yet**
+- Create `src/components/Badge/` directory
+- Create `src/components/Badge/Badge.test.tsx` (test file only)
+
+### Step 2: Discover Variants
+Call `mcp_figma_get_design_context(nodeId: "42:1", fileKey: "abc123")`
+
+Response shows:
+```json
+{
+ "componentPropertyDefinitions": {
+ "State": {
+ "variantOptions": ["Default", "Selected", "Disabled"]
+ },
+ "ShowCount": {
+ "variantOptions": ["True", "False"]
+ }
+ }
+}
+```
+
+### Step 2.4: Create Test Structure with TODO Placeholders
+Before writing any test implementations, create describe blocks for ALL variant combinations:
+
+```typescript
+describe('Badge', () => {
+ // Figma Variant: State=Default, ShowCount=True
+ describe('Default state with count visible', () => {
+ // TODO: Add tests for State=Default, ShowCount=True
+ });
+
+ // Figma Variant: State=Default, ShowCount=False
+ describe('Default state with count hidden', () => {
+ // TODO: Add tests for State=Default, ShowCount=False
+ });
+
+ // Figma Variant: State=Selected, ShowCount=True
+ describe('Selected state with count visible', () => {
+ // TODO: Add tests for State=Selected, ShowCount=True
+ });
+
+ // Figma Variant: State=Selected, ShowCount=False
+ describe('Selected state with count hidden', () => {
+ // TODO: Add tests for State=Selected, ShowCount=False
+ });
+
+ // Figma Variant: State=Disabled, ShowCount=True
+ describe('Disabled state with count visible', () => {
+ // TODO: Add tests for State=Disabled, ShowCount=True
+ });
+
+ // Figma Variant: State=Disabled, ShowCount=False
+ describe('Disabled state with count hidden', () => {
+ // TODO: Add tests for State=Disabled, ShowCount=False
+ });
+
+ describe('Element ordering', () => {
+ // TODO: Add structural tests that apply across all variants
+ });
+});
+```
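
The describe blocks above enumerate the cartesian product of `componentPropertyDefinitions`; generating that product mechanically guards against missing a combination. A sketch (not part of the skill's tooling):

```typescript
// Sketch: expand componentPropertyDefinitions into every variant combination.
type PropertyDefs = Record<string, { variantOptions: string[] }>;

function variantCombinations(defs: PropertyDefs): Record<string, string>[] {
  return Object.entries(defs).reduce<Record<string, string>[]>(
    (combos, [prop, { variantOptions }]) =>
      // For each property, branch every existing combination per option
      combos.flatMap((combo) =>
        variantOptions.map((option) => ({ ...combo, [prop]: option })),
      ),
    [{}],
  );
}

const combos = variantCombinations({
  State: { variantOptions: ['Default', 'Selected', 'Disabled'] },
  ShowCount: { variantOptions: ['True', 'False'] },
});
console.log(combos.length); // 6
```

Each emitted object (e.g. `{ State: 'Selected', ShowCount: 'False' }`) maps one-to-one onto a describe block and its variant comment.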
+
+✅ All 6 variant combinations have describe blocks with TODOs
+✅ Every describe block has a variant comment
+✅ Structure is ready for test implementations
+
+### Step 5.2: Map Variants to Props
+Based on variant names and tracking table:
+```
+ShowCount=False → count={0} or showCount={false}
+ShowCount=True  → count={5} (any number > 0)
+State=Selected  → isSelected={true}
+State=Disabled  → disabled={true}
+State=Default   → No special props (baseline)
+```
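
The mapping above can be captured as a small helper so tests stay consistent; the prop names (`count`, `isSelected`, `disabled`) are the assumed Badge API from this example, not a general rule:

```typescript
// Sketch: translate a Figma variant combination into Badge props.
// Prop names are assumptions from this example's Badge component.
interface BadgeProps {
  count: number;
  isSelected?: boolean;
  disabled?: boolean;
}

function propsForVariant(variant: { State: string; ShowCount: string }): BadgeProps {
  return {
    // Any number > 0 exercises ShowCount=True; 0 exercises ShowCount=False
    count: variant.ShowCount === 'True' ? 5 : 0,
    isSelected: variant.State === 'Selected' || undefined,
    disabled: variant.State === 'Disabled' || undefined,
  };
}

console.log(propsForVariant({ State: 'Selected', ShowCount: 'True' }));
```
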
+
+### Step 5.3: Fill in Tests from Tracking Table
+**Replace TODOs with actual tests - each row in tracking table β one test:**
+
+```typescript
+describe('Badge', () => {
+ // Figma Variant: ShowCount=False (from tracking table row 1)
+ describe('Empty state', () => {
+ // Figma Variant: ShowCount=False (Conditional Render: Count Badge hidden)
+ it('hides count badge when ShowCount=False', () => {
+      render(<Badge showCount={false} />);
+ expect(screen.queryByTestId('badge-count')).not.toBeInTheDocument();
+ });
+
+ // Figma Variant: ShowCount=False (from tracking table row 2: gap changes from gap-2 to gap-1)
+ it('applies 4px gap when count is hidden', () => {
+      const { container } = render(<Badge showCount={false} />);
+ expect(container.firstChild).toHaveClass('gap-1'); // 4px = gap-1
+ });
+ });
+
+ // Figma Variant: State=Default, ShowCount=True (baseline)
+ describe('Default state', () => {
+ // Figma Variant: State=Default, ShowCount=True
+ it('shows count badge with value', () => {
+      render(<Badge count={5} />);
+ expect(screen.getByTestId('badge-count')).toHaveTextContent('5');
+ });
+
+ // Figma Variant: State=Default, ShowCount=True (baseline color: Primary 600)
+ it('applies primary 600 text color in default state', () => {
+      render(<Badge count={5} />);
+ expect(screen.getByTestId('badge-label')).toHaveClass('text-primary-600');
+ });
+ });
+
+ // Figma Variant: State=Selected, ShowCount=True (from tracking table rows 3-4)
+ describe('Selected state', () => {
+ // Figma Variant: State=Selected, ShowCount=True (from tracking table row 3: color changes from primary-600 to primary-800)
+ it('applies primary 800 text color when selected', () => {
+      render(<Badge count={5} isSelected />);
+ expect(screen.getByTestId('badge-label')).toHaveClass('text-primary-800');
+ });
+
+ // Figma Variant: State=Selected, ShowCount=True (from tracking table row 4: background changes from transparent to bg-primary-50)
+ it('applies background highlight when selected', () => {
+      render(<Badge count={5} isSelected />);
+ expect(screen.getByTestId('badge-root')).toHaveClass('bg-primary-50');
+ });
+ });
+
+ // Figma Variant: State=Disabled (from tracking table rows 5-6)
+ describe('Disabled state', () => {
+ // Figma Variant: State=Disabled (from tracking table row 5: color changes from primary-600 to gray-400)
+ it('applies gray color when disabled', () => {
+      render(<Badge count={5} disabled />);
+ expect(screen.getByTestId('badge-label')).toHaveClass('text-gray-400');
+ });
+
+ // Figma Variant: State=Disabled (from tracking table row 6: cursor changes to not-allowed)
+ it('shows not-allowed cursor when disabled', () => {
+      render(<Badge count={5} disabled />);
+ expect(screen.getByTestId('badge-root')).toHaveClass('cursor-not-allowed');
+ });
+ });
+
+ describe('Element ordering', () => {
+ // Figma Variant: All variants (structural consistency)
+ it('renders label before count badge', () => {
+      render(<Badge count={5} />);
+ const container = screen.getByTestId('badge-root');
+ const children = Array.from(container.children);
+ expect(children[0]).toHaveAttribute('data-testid', 'badge-label');
+ expect(children[1]).toHaveAttribute('data-testid', 'badge-count');
+ });
+ });
+});
+```
+
+### Step 5.4: Complete Validation Checkpoint
+Before writing component implementation, verify:
+- ✅ Step 2.4 completed: All describe blocks created with TODO placeholders
+- ✅ Every describe block has a `// Figma Variant: Property=Value` comment
+- ✅ All TODOs replaced with actual test implementations
+- ✅ Tracking table created with 6 rows (one per visual difference)
+- ✅ Every row in the table has a corresponding unit test (6 tests generated)
+- ✅ Every individual test has a `// Figma Variant: Property=Value` comment with FULL details
+- ✅ Before/After values in tests match the tracking table
+- ✅ Primary 600, Primary 800, Primary 50, and Gray 400 colors matched to `mcp_figma_get_variable_defs`
+- ✅ Gap values (4px, 8px) matched to design variables (Spacing/xs, Spacing/md)
+- ✅ Selected state tests include both text color AND background changes
+- ✅ All 6 variant combinations have test coverage
+
+## Integration with Other Skills
+
+This skill works alongside:
+- **figma-component-sync**: Validates component matches Figma after implementation
+- **tailwind-utility-simplification**: Ensures test assertions use correct Tailwind classes
+- **cross-package-types**: Verifies component prop types align with shared types
+
+## Validation Tool
+
+This skill includes a validation script to check test quality:
+
+```bash
+# Validate a test file
+.github/skills/component-set-test-driven-development/scripts/validate-tests.sh \
+ path/to/ComponentName.test.tsx
+```
+
+The script checks for:
+- ✓ Variant comment coverage (all tests have `// Figma Variant:` comments)
+- ✓ Interactive state coverage (hover tests with `userEvent.hover()`)
+- ✓ Selection state tests (props like `isSelected`, `isActive`, `isCurrent`)
+- ✓ Tooltip/overlay tests
+- ✓ Value precision (flags arbitrary Tailwind values)
+
+**Exit codes:**
+- `0` = All checks passed
+- `1` = Missing variant comments or other critical issues
+
+Use this before committing test files or as part of CI validation.
+
+## Resources
+
+- [Figma Component Sets Documentation](https://help.figma.com/hc/en-us/articles/360056440594-Create-and-use-variants)
diff --git a/.github/skills/component-set-test-driven-development/scripts/README.md b/.github/skills/component-set-test-driven-development/scripts/README.md
new file mode 100644
index 00000000..21b01777
--- /dev/null
+++ b/.github/skills/component-set-test-driven-development/scripts/README.md
@@ -0,0 +1,106 @@
+# Component Set Testing Validation Script
+
+This script validates that test files follow the component-set-test-driven-development skill requirements.
+
+## Usage
+
+```bash
+# Validate a single test file
+.github/skills/component-set-test-driven-development/scripts/validate-tests.sh \
+ packages/client/src/components/MyComponent/MyComponent.test.tsx
+
+# Example output
+🔍 Component Set Test Validator
+================================
+
+📄 Validating: packages/client/src/components/MyComponent/MyComponent.test.tsx
+
+▶ Check 1: Variant Comment Coverage
+  ✓ All tests have variant comments (15/15)
+
+▶ Check 2: Interactive State Coverage
+  ✓ Hover tests found (3 occurrences)
+  ✓ Selection/active state tests found
+
+▶ Check 3: Tooltip/Overlay Testing
+  ✓ Tooltip/overlay tests found
+
+▶ Check 4: Value Precision
+  ✓ No arbitrary values detected
+
+✅ PASSED: All critical checks passed
+```
+
+## What It Checks
+
+### 1. Variant Comment Coverage (REQUIRED)
+Every `it()` and `describe()` block must have a comment in the format:
+```typescript
+// Figma Variant: State=Rest, Count=True
+it('renders correctly', () => { ... });
+```
+
+**Missing variant comments cause failure.** The script treats variant comments as non-negotiable; unimplemented TODO placeholders also fail validation, while the remaining checks emit warnings only.
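
The same rule can be reproduced outside the shell script, e.g. in a lint step or editor tooling. This TypeScript sketch (not the script itself) mirrors the script's look-back window of three lines above each test:

```typescript
// Sketch: flag it()/describe() blocks lacking a nearby Figma Variant comment.
function missingVariantComments(source: string): number[] {
  const lines = source.split('\n');
  const offenders: number[] = [];
  lines.forEach((line, i) => {
    if (!/\b(it|describe)\s*\(/.test(line)) return;
    // Look at the three lines above, plus the line itself
    const context = lines.slice(Math.max(0, i - 3), i + 1);
    if (!context.some((l) => l.includes('// Figma Variant:'))) {
      offenders.push(i + 1); // 1-based line numbers, like the script's output
    }
  });
  return offenders;
}
```
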
+
+### 2. Interactive State Coverage (Context-Aware)
+The script **reads your variant comments** to understand what states exist, then checks if appropriate tests are present:
+
+- If it finds `State=Hover` → looks for `userEvent.hover()` tests
+- If it finds `State=Selected` or `State=Active` → looks for selection-related tests
+- If it finds `State=Focus` → looks for focus/keyboard tests
+- If it finds `State=Disabled` → looks for disabled state tests
+
+**Warnings only** - Won't fail the build, just suggests improvements based on your actual variants.
+
+Example:
+```typescript
+// Figma Variant: State=Hover
+describe('Hover state', () => {
+  // ← Script sees "State=Hover" and checks for hover interaction tests
+  it('highlights on hover', async () => {
+    await userEvent.hover(element); // ✓ Found!
+ });
+});
+```
+
+### 3. Value Precision
+- Flags arbitrary Tailwind values like `h-[21px]` or `w-[18px]`
+- These should match exact design variable values from Figma
+- **Warning only** - Reminds you to verify against `get_variable_defs` results
+
+## Exit Codes
+
+- `0` - All checks passed (tests can be committed)
+- `1` - Critical issues found (missing variant comments or unimplemented TODOs)
+
+## Integration with CI
+
+Add to your CI pipeline:
+
+```yaml
+# .github/workflows/test.yml
+- name: Validate Component Set Tests
+ run: |
+ find packages/client/src/components -name "*.test.tsx" -type f | while read file; do
+ if grep -q "Figma Variant:" "$file"; then
+      .github/skills/component-set-test-driven-development/scripts/validate-tests.sh "$file"
+ fi
+ done
+```
+
+## Fixing Validation Errors
+
+If you see:
+```
+✗ Line 42: Missing variant comment - "renders upvote button"
+```
+
+Add a variant comment above the test:
+```typescript
+// Figma Variant: State=Rest, Count=False
+it('renders upvote button', () => {
+ // test implementation
+});
+```
+
+Every test must document which Figma variant combination it validates.
diff --git a/.github/skills/component-set-test-driven-development/scripts/validate-tests.sh b/.github/skills/component-set-test-driven-development/scripts/validate-tests.sh
new file mode 100755
index 00000000..dfa42418
--- /dev/null
+++ b/.github/skills/component-set-test-driven-development/scripts/validate-tests.sh
@@ -0,0 +1,280 @@
+#!/bin/bash
+
+# Component Set Test Validation Script
+# Validates that test files follow component-set-testing skill requirements
+# Usage: ./validate-tests.sh <path-to-test-file>
+
+set -e
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# Counters
+TOTAL_TESTS=0
+TESTS_WITH_COMMENTS=0
+MISSING_COMMENTS=0
+ERRORS=()
+
+echo "🔍 Component Set Test Validator"
+echo "================================"
+echo ""
+
+# Check if file path provided
+if [ -z "$1" ]; then
+ echo -e "${RED}Error: No test file specified${NC}"
+    echo "Usage: $0 <path-to-test-file>"
+ echo ""
+ echo "Example:"
+ echo " $0 packages/client/src/components/ReactionStatistics/ReactionStatistics.test.tsx"
+ exit 1
+fi
+
+TEST_FILE="$1"
+
+# Check if file exists
+if [ ! -f "$TEST_FILE" ]; then
+ echo -e "${RED}Error: File not found: $TEST_FILE${NC}"
+ exit 1
+fi
+
+echo "📄 Validating: $TEST_FILE"
+echo ""
+
+# ============================================================================
+# Check 1: Variant Comment Coverage
+# ============================================================================
+echo "▶ Check 1: Variant Comment Coverage"
+echo " Looking for: // Figma Variant: Property=Value"
+echo ""
+
+# Find all it() and describe() blocks
+while IFS=: read -r LINE_NUM line; do
+ # Check if line contains it( or describe(
+ if echo "$line" | grep -qE "(it|describe)\s*\("; then
+ TOTAL_TESTS=$((TOTAL_TESTS + 1))
+
+        # Check whether a variant comment appears within the 3 lines above
+ START_LINE=$((LINE_NUM - 3))
+ if [ $START_LINE -lt 1 ]; then
+ START_LINE=1
+ fi
+
+ # Extract context around the test
+ CONTEXT=$(sed -n "${START_LINE},${LINE_NUM}p" "$TEST_FILE")
+
+ if echo "$CONTEXT" | grep -q "// Figma Variant:"; then
+ TESTS_WITH_COMMENTS=$((TESTS_WITH_COMMENTS + 1))
+ else
+ MISSING_COMMENTS=$((MISSING_COMMENTS + 1))
+ # Extract test name
+ TEST_NAME=$(echo "$line" | sed -E "s/.*['\"](.+)['\"].*/\1/" | head -n 1)
+ ERRORS+=("Line $LINE_NUM: Missing variant comment - \"$TEST_NAME\"")
+ fi
+ fi
+done < <(grep -n -E "(it|describe)\s*\(" "$TEST_FILE" || true)
+
+if [ $MISSING_COMMENTS -eq 0 ]; then
+    echo -e " ${GREEN}✓ All tests have variant comments ($TESTS_WITH_COMMENTS/$TOTAL_TESTS)${NC}"
+else
+    echo -e " ${RED}✗ Missing variant comments: $MISSING_COMMENTS/$TOTAL_TESTS${NC}"
+fi
+echo ""
+
+# ============================================================================
+# Check 2: Interactive State Coverage (based on variant comments)
+# ============================================================================
+echo "▶ Check 2: Interactive State Coverage"
+echo ""
+
+# Extract all variant values from comments to understand what the component supports
+VARIANT_COMMENTS=$(grep -o "// Figma Variant:.*" "$TEST_FILE" || true)
+
+# Check if variants mention interactive states (case-insensitive)
+HAS_HOVER_VARIANT=$(echo "$VARIANT_COMMENTS" | grep -iE "(State|Type|Mode)=(Hover|Hovered)" || true)
+HAS_SELECTED_VARIANT=$(echo "$VARIANT_COMMENTS" | grep -iE "(State|Type|Mode)=(Selected|Active|Pressed)" || true)
+HAS_FOCUS_VARIANT=$(echo "$VARIANT_COMMENTS" | grep -iE "(State|Type|Mode)=(Focus|Focused)" || true)
+HAS_DISABLED_VARIANT=$(echo "$VARIANT_COMMENTS" | grep -iE "(State|Type|Mode)=(Disabled|Inactive)" || true)
+
+# Only check for interaction tests if variants suggest them
+if [ -n "$HAS_HOVER_VARIANT" ]; then
+    if grep -qE "(userEvent\.hover|\.hover\()" "$TEST_FILE"; then
+        echo -e " ${GREEN}✓ Hover variant detected, hover tests found${NC}"
+    else
+        echo -e " ${YELLOW}⚠ Hover variant detected, but no hover interaction tests found${NC}"
+        echo "    Consider adding userEvent.hover() tests for hover state"
+    fi
+fi
+
+if [ -n "$HAS_SELECTED_VARIANT" ]; then
+    if grep -qE "(Selected|Active|Pressed)" "$TEST_FILE"; then
+        echo -e " ${GREEN}✓ Selection/active variant detected, related tests found${NC}"
+    else
+        echo -e " ${YELLOW}⚠ Selection/active variant detected, but no related tests found${NC}"
+        echo "    Consider testing how the component looks when selected/active"
+    fi
+fi
+
+if [ -n "$HAS_FOCUS_VARIANT" ]; then
+    if grep -qE "(\.focus\(\)|userEvent\.tab)" "$TEST_FILE"; then
+        echo -e " ${GREEN}✓ Focus variant detected, focus tests found${NC}"
+    else
+        echo -e " ${YELLOW}⚠ Focus variant detected, but no focus tests found${NC}"
+        echo "    Consider adding keyboard navigation tests"
+    fi
+fi
+
+if [ -n "$HAS_DISABLED_VARIANT" ]; then
+    if grep -qE "disabled" "$TEST_FILE"; then
+        echo -e " ${GREEN}✓ Disabled variant detected, disabled tests found${NC}"
+    else
+        echo -e " ${YELLOW}⚠ Disabled variant detected, but no disabled state tests found${NC}"
+        echo "    Consider testing disabled behavior"
+    fi
+fi
+
+# If no interactive variants detected, that's fine
+if [ -z "$HAS_HOVER_VARIANT" ] && [ -z "$HAS_SELECTED_VARIANT" ] && [ -z "$HAS_FOCUS_VARIANT" ] && [ -z "$HAS_DISABLED_VARIANT" ]; then
+    echo -e " ${GREEN}✓ No interactive state variants detected${NC}"
+fi
+echo ""
+
+# ============================================================================
+# Check 3: TODO Completion (Phase 5 Implementation Check)
+# ============================================================================
+echo "▶ Check 3: TODO Completion"
+echo " Checking that all test TODOs have been implemented..."
+echo ""
+
+# Count TODO comments in test file
+TODO_COUNT=$(grep -c "// TODO" "$TEST_FILE" || true)
+
+if [ "$TODO_COUNT" -eq 0 ]; then
+    echo -e " ${GREEN}✓ No TODOs remaining - all tests implemented${NC}"
+else
+    echo -e " ${RED}✗ Found $TODO_COUNT TODO comments still in the test file${NC}"
+    echo "    Phase 5 implementation is incomplete. Found TODOs at:"
+    grep -n "// TODO" "$TEST_FILE" | head -n 5 | while read -r line; do
+        echo -e "      ${YELLOW}$line${NC}"
+    done
+    if [ "$TODO_COUNT" -gt 5 ]; then
+        echo "    ... and $((TODO_COUNT - 5)) more"
+    fi
+    ERRORS+=("Found $TODO_COUNT unimplemented TODOs - Phase 5 incomplete")
+fi
+echo ""
+
+# ============================================================================
+# Check 4: Spacing Test Coverage
+# ============================================================================
+echo "▶ Check 4: Spacing Test Coverage"
+echo " Checking for padding, margin, gap tests..."
+echo ""
+
+HAS_SPACING_TESTS=false
+if grep -qE "(toHaveClass.*['\"].*[pmg][xytblr]?-)" "$TEST_FILE"; then
+    HAS_SPACING_TESTS=true
+    echo -e " ${GREEN}✓ Spacing tests found (padding/margin/gap)${NC}"
+else
+    echo -e " ${YELLOW}⚠ No spacing tests detected${NC}"
+    echo "    Consider testing padding (p-*), margin (m-*), and gap (gap-*)"
+    echo "    These are commonly missed but critical for design fidelity"
+fi
+echo ""
+
+# ============================================================================
+# Check 5: Pseudo-Class Test Coverage
+# ============================================================================
+echo "▶ Check 5: Pseudo-Class Test Coverage"
+echo " Checking for hover:, focus:, active: style tests..."
+echo ""
+
+HAS_PSEUDO_TESTS=false
+if grep -qE "toHaveClass.*['\"].*((hover|focus|active|disabled):)" "$TEST_FILE"; then
+    HAS_PSEUDO_TESTS=true
+    echo -e " ${GREEN}✓ Pseudo-class style tests found${NC}"
+else
+    # Check if interactive variants exist
+    if [ -n "$HAS_HOVER_VARIANT" ] || [ -n "$HAS_FOCUS_VARIANT" ] || [ -n "$HAS_DISABLED_VARIANT" ]; then
+        echo -e " ${YELLOW}⚠ Interactive variants detected but no pseudo-class tests found${NC}"
+        echo "    Consider testing hover:, focus:, active:, disabled: utility classes"
+        echo "    Example: expect(element).toHaveClass('hover:text-primary-600')"
+    else
+        echo -e " ${GREEN}✓ No interactive variants - pseudo-class tests not required${NC}"
+    fi
+fi
+echo ""
+
+# ============================================================================
+# Check 6: Value Precision (Design Variables)
+# ============================================================================
+echo "β Check 6: Value Precision"
+echo ""
+
+HAS_ARBITRARY_VALUES=false
+if grep -qE "\[((\d+px)|(\d+rem))\]" "$TEST_FILE"; then
+ HAS_ARBITRARY_VALUES=true
+ echo -e " ${YELLOW}β Arbitrary Tailwind values found (e.g., h-[21px])${NC}"
+ echo " Verify these match exact design variable values"
+else
+ echo -e " ${GREEN}β No arbitrary values detected${NC}"
+fi
+echo ""
+
+# ============================================================================
+# Summary Report
+# ============================================================================
+echo "================================"
+echo "π Validation Summary"
+echo "================================"
+echo ""
+echo "Test Coverage:"
+echo " Total tests/describes: $TOTAL_TESTS"
+echo " With variant comments: $TESTS_WITH_COMMENTS"
+echo " Missing comments: $MISSING_COMMENTS"
+echo " Remaining TODOs: $TODO_COUNT"
+echo ""
+
+# Print errors
+if [ ${#ERRORS[@]} -gt 0 ]; then
+ echo -e "${RED}β Validation Errors:${NC}"
+ echo ""
+ for error in "${ERRORS[@]}"; do
+ echo -e " ${RED}β’${NC} $error"
+ done
+ echo ""
+fi
+
+# Final verdict
+if [ $MISSING_COMMENTS -eq 0 ] && [ $TODO_COUNT -eq 0 ]; then
+ echo -e "${GREEN}β
PASSED: All critical checks passed${NC}"
+ echo ""
+ echo "Summary:"
+ echo " β All tests have Figma Variant comments"
+ echo " β No TODOs remaining - implementation complete"
+ if [ "$HAS_SPACING_TESTS" = true ]; then
+ echo " β Spacing tests present"
+ fi
+ if [ "$HAS_PSEUDO_TESTS" = true ] || [ -z "$HAS_HOVER_VARIANT$HAS_FOCUS_VARIANT$HAS_DISABLED_VARIANT" ]; then
+ echo " β Interactive state tests appropriate"
+ fi
+ echo ""
+ exit 0
+else
+ echo -e "${RED}β FAILED: Fix errors before proceeding to Phase 6 (component implementation)${NC}"
+ echo ""
+ if [ $MISSING_COMMENTS -gt 0 ]; then
+ echo "Required variant comment format:"
+ echo " // Figma Variant: State=Rest, Count=True"
+ echo " it('test name', () => { ... });"
+ echo ""
+ fi
+ if [ $TODO_COUNT -gt 0 ]; then
+ echo "All TODOs must be replaced with actual test implementations."
+ echo "Review Phase 3 tracking table and implement remaining tests."
+ echo ""
+ fi
+ exit 1
+fi
diff --git a/.github/skills/figma-url-router/INTEGRATION.md b/.github/skills/figma-url-router/INTEGRATION.md
new file mode 100644
index 00000000..9d8fd6ab
--- /dev/null
+++ b/.github/skills/figma-url-router/INTEGRATION.md
@@ -0,0 +1,140 @@
+# Integrating figma-url-router Skill into implement-story Mode
+
+## Problem
+The current implement-story mode doesn't automatically detect Figma Component Sets, leading to:
+- Tests written AFTER implementation (not TDD)
+- Missing test coverage for variant combinations
+- Manual detection required from spec authors
+
+## Solution
+Update the implement-story mode instructions to use the figma-url-router skill as the first step when Figma URLs are present.
+
+## Where to Update
+
+The mode instructions are defined in the system prompt configuration. You need to update the instructions section for the "implement-story" mode.
+
+### Current Step 2 (Design Specification Discovery):
+
+```markdown
+## Step 2: Design Specification Discovery
+
+### **Figma Component Sets**
+
+When the Figma link points to a Component Set (multi-variant component):
+- Discover all variant properties and their possible values
+- Generate tests for every variant combination
+- Track all values against design variables
+- Map variants to application and interaction states
+
+### **Single Components or Screens**
+
+For non-Component Set designs:
+1. **Visual Context:**
+ - Call `mcp_figma_get_screenshot` for the design
+ - Call `mcp_figma_get_design_context` to understand structure
+ ...
+```
+
+### Updated Step 2 (with automatic detection):
+
+```markdown
+## Step 2: Design Specification Discovery
+
+### **Automatic Figma Node Detection**
+
+⚠️ **CRITICAL:** If Figma URLs are present in the spec, use the `figma-url-router` skill FIRST.
+
+**Before any implementation or design analysis:**
+1. For each Figma URL in the spec:
+ - Extract fileKey and nodeId
+ - Call `mcp_figma_get_design_context` to detect node type
+ - Route to appropriate workflow
+
+**Routing logic:**
+- **Component Set detected** → Use component-set-testing skill (TDD - tests first!)
+- **Single Component/Frame** → Standard implementation flow
+- Document the routing decision
+
+### **After Routing: Component Set Workflow (TDD)**
+
+When routed to Component Set:
+- DO NOT write component code yet
+- Follow component-set-testing skill exactly:
+ 1. Discover all variant properties and values
+ 2. Generate failing tests for every variant combination
+ 3. Track all values against design variables
+ 4. Map variants to application and interaction states
+ 5. Write tests FIRST (they should fail)
+ 6. Then implement component to pass tests
+
+### **After Routing: Standard Component Workflow**
+
+For non-Component Set designs:
+1. **Visual Context:**
+ - Call `mcp_figma_get_screenshot` for the design
+ - Call `mcp_figma_get_design_context` to understand structure
+ ...
+```
+
+## Benefits
+
+1. **No manual detection**: Agent automatically identifies Component Sets
+2. **Enforces TDD**: Component Sets always get tests-first approach
+3. **Consistent workflow**: Same process every time
+4. **Spec simplicity**: Authors don't need to know Figma terminology
+
+## Example Flow
+
+**Before (manual):**
+```
+Spec has Figma URL → Agent assumes single component → Implements → Tests after
+```
+
+**After (automatic):**
+```
+Spec has Figma URL
+↓ Agent uses figma-url-router skill
+↓ Detects COMPONENT_SET
+↓ Routes to component-set-testing skill
+↓ Writes tests first (TDD)
+↓ Then implements
+```
+
+## Checklist for Integration
+
+- [ ] Add figma-url-router to skills registry
+- [ ] Update mode instructions Step 2
+- [ ] Test with known Component Set URL
+- [ ] Test with single component URL
+- [ ] Verify TDD is followed for Component Sets
+- [ ] Document in team workflow docs
+
+## Testing the Integration
+
+Use this spec to test:
+
+```markdown
+# Test: Automatic Component Set Detection
+
+As a developer, I want to verify the figma-url-router automatically detects Component Sets.
+
+Designs:
+- Test component: https://www.figma.com/design/7QW0kJ07DcM36mgQUJ5Dtj/Carton?node-id=1109-10911
+
+Acceptance Criteria:
+GIVEN the agent sees the Figma URL
+THEN it should automatically detect if it's a Component Set
+AND route to the appropriate workflow
+AND write tests first if it's a Component Set
+
+Developer Notes:
+- Do not manually specify node type
+- Let the agent figure it out
+```
+
+Expected behavior:
+1. Agent sees Figma URL
+2. Calls figma-url-router skill
+3. Detects type via `mcp_figma_get_design_context`
+4. Routes appropriately
+5. If Component Set: writes tests BEFORE component
diff --git a/.github/skills/figma-url-router/SKILL.md b/.github/skills/figma-url-router/SKILL.md
new file mode 100644
index 00000000..54ddf887
--- /dev/null
+++ b/.github/skills/figma-url-router/SKILL.md
@@ -0,0 +1,378 @@
+---
+name: figma-url-router
+description: 🚨 REQUIRED FIRST when ANY Figma URL appears in specs. Automatically detects Figma node types (Component Set, Component, Frame) and routes to appropriate implementation workflow (TDD for Component Sets, standard for others). Prevents implementation before proper workflow is established.
+---
+
+# Skill: Figma URL Router
+
+**⚠️ CRITICAL: Use this skill FIRST when you see ANY Figma URL in a specification.**
+
+This skill automatically detects what type of Figma node you're working with and routes you to the correct implementation workflow.
+
+## Purpose
+
+When implementing features with Figma designs, the approach differs significantly based on the node type:
+- **Component Set** → Use TDD with component-set-testing skill
+- **Single Component** → Standard implementation flow
+- **Frame/Section** → UI layout implementation
+
+Instead of requiring spec authors to identify the type, this skill automatically detects it.
+
+## When to Use
+
+**REQUIRED** - Use this skill when:
+- ANY Figma URL appears in a specification
+- You see `figma.com/design/` or `figma.com/file/` links
+- Before writing any component or implementation code
+- At the start of implement-story mode
+
+**Flow:**
+```
+See Figma URL → Use figma-url-router skill → Get routed to correct workflow
+```
+
+## Detection Process
+
+### Step 1: Extract Figma Parameters
+
+Parse the URL to extract:
+- `fileKey`: The file identifier
+- `nodeId`: The specific node (convert `node-id=1-2` to `1:2`)
+
+**URL patterns:**
+```
+https://figma.com/design/:fileKey/:fileName?node-id=1-2
+https://figma.com/file/:fileKey/:fileName?node-id=1-2
+```
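
The extraction above can be sketched as a small helper. This is an illustrative sketch, not a published API: the function name and return shape are assumptions.

```typescript
// Hypothetical helper: extract fileKey and nodeId from a Figma URL.
// Handles both /design/ and /file/ forms; converts node-id=1-2 to "1:2".
function parseFigmaUrl(url: string): { fileKey: string; nodeId: string } | null {
  const match = url.match(
    /figma\.com\/(?:design|file)\/([^/?]+)\/[^?]*\?.*node-id=(\d+-\d+)/
  );
  if (!match) return null;
  return { fileKey: match[1], nodeId: match[2].replace("-", ":") };
}
```

Trailing query parameters such as `&t=...` are ignored, since the node-id capture stops at the first non-digit character.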
+
+### Step 2: Query Node Type
+
+Call `mcp_figma_get_design_context` to identify the node:
+
+```typescript
+const context = await mcp_figma_get_design_context({
+ nodeId: "1:2",
+ fileKey: "abc123"
+});
+
+// Check the type
+const nodeType = context.document.type;
+```
+
+### Step 3: Route to Appropriate Workflow
+
+Based on `document.type`, route to the correct approach:
+
+| Node Type | Workflow | Skill to Use |
+|-----------|----------|--------------|
+| `COMPONENT_SET` | Test-Driven Development | component-set-testing |
+| `COMPONENT` | Standard implementation | (none - regular flow) |
+| `FRAME` | UI layout implementation | (none - regular flow) |
+| `SECTION` | UI layout implementation | (none - regular flow) |
+| `GROUP` | UI layout implementation | (none - regular flow) |
+
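The table above can be collapsed into a single routing function. This is a sketch; the function name and the return shape (mirroring the route snippets below) are assumptions, not part of any MCP API.

```typescript
type Routing = {
  workflow: string;
  skill: string | null;
  shouldWriteTestsFirst: boolean;
};

// Hypothetical router: map a Figma node type to its implementation workflow.
function routeByNodeType(nodeType: string): Routing {
  switch (nodeType) {
    case "COMPONENT_SET":
      // TDD required: tests before component code
      return { workflow: "test-driven-development", skill: "component-set-testing", shouldWriteTestsFirst: true };
    case "COMPONENT":
      return { workflow: "standard-implementation", skill: null, shouldWriteTestsFirst: false };
    case "FRAME":
    case "SECTION":
    case "GROUP":
      return { workflow: "layout-implementation", skill: null, shouldWriteTestsFirst: false };
    default:
      throw new Error(`Unrecognized Figma node type: ${nodeType}`);
  }
}
```
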
+## Routing Logic
+
+### Route 1: Component Set (TDD Required)
+
+```typescript
+if (context.document.type === "COMPONENT_SET") {
+ console.log("β
Detected COMPONENT_SET - Using TDD approach");
+ console.log("π Routing to: component-set-testing skill");
+
+ // DO NOT write component code yet
+ // Follow component-set-testing skill:
+ // 1. Discover all variant combinations
+ // 2. Write failing tests first
+ // 3. Implement component to pass tests
+
+ return {
+ workflow: "test-driven-development",
+ skill: "component-set-testing",
+ shouldWriteTestsFirst: true
+ };
+}
+```
+
+**Next steps:**
+1. Read the component-set-testing skill
+2. Follow TDD process exactly
+3. Tests before implementation
+
+### Route 2: Single Component
+
+```typescript
+if (context.document.type === "COMPONENT") {
+ console.log("β
Detected single COMPONENT");
+ console.log("π Using standard implementation flow");
+
+ // Standard flow:
+ // 1. Get screenshot
+ // 2. Get variable definitions
+ // 3. Implement component
+ // 4. Write tests
+
+ return {
+ workflow: "standard-implementation",
+ skill: null,
+ shouldWriteTestsFirst: false
+ };
+}
+```
+
+**Next steps:**
+1. Call `mcp_figma_get_screenshot` for visual reference
+2. Call `mcp_figma_get_variable_defs` to track all values
+3. Implement component matching design exactly
+4. Write tests to verify implementation
+
+### Route 3: Frame/Section/Group (Layout)
+
+```typescript
+if (["FRAME", "SECTION", "GROUP"].includes(context.document.type)) {
+ console.log("β
Detected layout node:", context.document.type);
+ console.log("π Using UI layout implementation");
+
+ // Layout flow:
+ // 1. Get screenshot for hierarchy
+ // 2. Identify nested components
+ // 3. Implement layout structure
+ // 4. Add component integration tests
+
+ return {
+ workflow: "layout-implementation",
+ skill: null,
+ shouldWriteTestsFirst: false
+ };
+}
+```
+
+**Next steps:**
+1. Capture screenshots
+2. Identify component composition
+3. Implement layout
+4. Test integration points
+
+## Complete Workflow
+
+```mermaid
+flowchart TD
+ A[Spec contains Figma URL] --> B[Extract fileKey & nodeId]
+ B --> C[Call mcp_figma_get_design_context]
+ C --> D{Check document.type}
+    D -->|COMPONENT_SET| E[✅ Route to component-set-testing]
+    D -->|COMPONENT| F[✅ Route to standard flow]
+    D -->|FRAME/SECTION/GROUP| G[✅ Route to layout flow]
+ E --> H[Write tests FIRST]
+ H --> I[Then implement component]
+ F --> J[Get design details]
+ J --> K[Implement then test]
+ G --> L[Implement layout]
+ L --> M[Test integration]
+```
+
+## Integration with implement-story Mode
+
+**🚨 MODE INSTRUCTION REQUIREMENT:**
+
+The implement-story mode instructions MUST call this skill as the FIRST step when Figma URLs are present. This is not optional.
+
+**Required mode instruction update:**
+```markdown
+## Step 2: Design Discovery
+
+**🚨 MANDATORY FIRST STEP: Route Figma URLs**
+
+Before ANY design analysis or implementation:
+
+1. **Scan for Figma URLs** in the spec
+2. **For EACH Figma URL found:**
+ - Use figma-url-router skill immediately
+ - Do NOT analyze or implement anything yet
+ - Wait for routing decision before proceeding
+3. **Follow routed workflow** exactly as specified
+
+**Routing outcomes:**
+- COMPONENT_SET detected → Use component-set-testing skill (TDD required)
+- Other node types → Continue with standard flow
+
+**Only AFTER routing all Figma URLs:**
+- Get screenshots for reference
+- Get variable definitions
+- Track all values
+- Proceed with implementation per routed workflow
+```
+
+## Validation & Enforcement
+
+**After routing to component-set-testing skill:**
+
+The router should verify the TDD workflow was actually followed by checking:
+
+1. **Test file exists** before component file
+2. **Validation script was run** (check for validate-tests.sh execution)
+3. **Tracking table was created** (should be documented in conversation)
+4. **All tests have variant comments** (validated by script)
+
+**If validation fails:**
+```markdown
+🚨 ERROR: TDD workflow not followed for Component Set
+
+Detected: COMPONENT_SET node type
+Expected: Test-Driven Development workflow via component-set-testing skill
+Found: Component implementation without proper TDD process
+
+REQUIRED ACTIONS:
+1. Delete component implementation file
+2. Return to component-set-testing skill Phase 0
+3. Create tracking table from Figma variants
+4. Write tests first (Phase 1-5)
+5. Run validation script (.github/skills/component-set-testing/scripts/validate-tests.sh)
+6. Only after script passes → implement component (Phase 6)
+
+Proceeding without this process results in:
+- Broken interaction handlers
+- Incorrect spacing
+- Misaligned hover behavior
+- Overlays that shift page content
+- Component not matching Figma design
+```
+
+## Reporting Routing Decisions
+
+When this skill is invoked, provide clear output to the user:
+
+**Format:**
+```markdown
+🔍 Figma URL Router Analysis
+
+**URL:** https://figma.com/design/:fileKey/:fileName?node-id=1109-10682
+**File Key:** :fileKey
+**Node ID:** 1109:10682
+
+**Detection Result:**
+- Node Type: COMPONENT_SET
+- Variant Properties Detected: State, Count
+- Total Variants: 12 combinations
+
+**Routing Decision:**
+✅ Routed to: component-set-testing skill
+⚠️ Workflow: Test-Driven Development (TDD)
+🚨 CRITICAL: Tests must be written BEFORE component implementation
+
+**Next Steps:**
+1. Do NOT create component file yet
+2. Follow component-set-testing skill phases 0-5
+3. Create tracking table of all visual differences
+4. Write comprehensive tests from tracking table
+5. Run validation script: .github/skills/component-set-testing/scripts/validate-tests.sh
+6. Only after validation passes → create component file (Phase 6)
+```
+
+This explicit reporting makes it impossible to miss that TDD is required.
+
+## Multiple Figma URLs
+
+If spec contains multiple Figma URLs:
+
+1. **Process each URL separately** through this router
+2. **Track routing decisions** for each
+3. **Group by workflow type**:
+   - All Component Sets → Use TDD
+   - All single components → Standard flow
+   - Mixed → Use most appropriate for each
+
+Example:
+```markdown
+Figma URLs in spec:
+1. Button Component Set → TDD workflow
+2. Header layout (Frame) → Layout workflow
+3. Icon (Component) → Standard workflow
+
+Implementation order:
+1. Button (TDD - tests first)
+2. Icon (standard)
+3. Header (layout - uses Button & Icon)
+```
+
+## Example Usage
+
+**Spec contains:**
+```markdown
+Design: https://figma.com/design/7QW0kJ07DcM36mgQUJ5Dtj/Design?node-id=1109-10911
+```
+
+**Agent process:**
+1. See Figma URL → Trigger figma-url-router skill
+2. Extract: `fileKey=7QW0kJ07DcM36mgQUJ5Dtj`, `nodeId=1109:10911`
+3. Call `mcp_figma_get_design_context(nodeId, fileKey)`
+4. Response shows: `document.type = "COMPONENT_SET"`
+5. Route to: component-set-testing skill (TDD)
+6. Do NOT create component file
+7. Write tests first
+8. Then implement
+
+## Critical Rules
+
+1. **🚨 ALWAYS use this skill first** when seeing Figma URLs - This is MANDATORY, not optional
+2. **🚨 NEVER assume node type** - always query with `get_design_context`
+3. **🚨 Route before ANY implementation** - don't write ANY code before routing
+4. **🚨 Follow routed workflow strictly** - especially TDD for Component Sets
+5. **🚨 Report routing decision** - use the explicit format shown above so it's crystal clear
+6. **🚨 Validate TDD compliance** - for Component Sets, verify validation script was run
+
+## Enforcement Mechanisms
+
+**For AI agents implementing features:**
+
+1. **Mode instructions MUST call this skill** - Update implement-story mode to require this as Step 2 first action
+2. **No implementation before routing** - Reject any component/code creation before routing decision
+3. **Validation required for Component Sets** - Must run validate-tests.sh script before component creation
+4. **Explicit reporting required** - Must output routing decision in the specified format
+
+**For code reviewers:**
+
+When reviewing Component Set implementations:
+```bash
+# Check if validation script was run during implementation
+grep -r "validate-tests.sh" conversation_history
+
+# Check if test file was created before component file
+ls -lt ComponentName.test.tsx ComponentName.tsx
+# Test file should have earlier timestamp
+
+# Run validation script now
+.github/skills/component-set-testing/scripts/validate-tests.sh \
+ path/to/ComponentName.test.tsx
+```
+
+## Common Mistakes to Avoid
+
+❌ **Don't** skip this skill and implement directly
+❌ **Don't** assume it's a single component without checking
+❌ **Don't** write component code before routing
+❌ **Don't** ignore Component Set detection (leads to missing tests)
+❌ **Don't** proceed with implementation if validation script fails
+❌ **Don't** create component file before test file for Component Sets
+
+✅ **Do** use this skill for EVERY Figma URL
+✅ **Do** query the node type first
+✅ **Do** follow the routed workflow exactly
+✅ **Do** use TDD when routed to component-set-testing
+✅ **Do** run validation script for Component Sets
+✅ **Do** report routing decision explicitly
+✅ **Do** create test file before component file
+
+## Benefits
+
+- **Automatic detection**: No manual inspection needed
+- **Correct workflow**: Always use right approach for node type
+- **TDD enforcement**: Component Sets automatically get proper test coverage
+- **Consistency**: Same process every time
+- **Spec simplicity**: Authors don't need to know Figma node types
+
+## Related Skills
+
+- **component-set-testing**: Used when routed to TDD workflow
+- **figma-component-sync**: Used for verification after implementation
+- **tailwind-utility-simplification**: Used during implementation phase
diff --git a/.github/skills/mcp-error-handling/SKILL.md b/.github/skills/mcp-error-handling/SKILL.md
new file mode 100644
index 00000000..59a74cc4
--- /dev/null
+++ b/.github/skills/mcp-error-handling/SKILL.md
@@ -0,0 +1,184 @@
+---
+name: mcp-error-handling
+description: Handle MCP (Model Context Protocol) errors by stopping all operations, reporting the issue clearly, and asking the user for next steps. Use whenever any MCP tool call returns an error, fails, or produces unexpected results.
+---
+
+# Skill: MCP Error Handling
+
+This skill defines the protocol for handling errors from MCP tools (Figma, Container, Prisma, etc.).
+
+## When to Apply
+
+Apply this skill **immediately** when:
+- Any MCP tool call returns an error message
+- An MCP tool produces unexpected or incomplete results
+- Connection issues occur with MCP services
+- Authentication or permission errors are encountered
+- Any tool prefixed with `mcp_` fails in any way
+
+## Error Response Protocol
+
+When an MCP error occurs, you MUST:
+
+### 1. Stop All Operations Immediately
+
+- **Halt** any planned tool calls
+- **Abort** multi-step workflows
+- **Cancel** pending operations
+- Do NOT attempt to retry or work around the error
+- Do NOT proceed with the task using incomplete data
+
+### 2. Report the Issue Clearly
+
+Provide a clear, structured error report:
+
+```markdown
+⚠️ **MCP Error Detected**
+
+**Tool:** [name of the MCP tool that failed]
+**Operation:** [what you were trying to do]
+**Error Message:** [exact error message received]
+
+**Context:**
+- [Any relevant context about what led to this error]
+- [What you were attempting to accomplish]
+
+**Impact:**
+- [What cannot be completed due to this error]
+- [Any data that may be incomplete or missing]
+```
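
One way to keep these reports consistent is a small formatter, sketched below. The `McpError` shape is an assumption for illustration, not an MCP type.

```typescript
interface McpError {
  tool: string;       // e.g. "mcp_figma_get_design_context"
  operation: string;  // what was being attempted
  message: string;    // exact error text received
  context: string[];
  impact: string[];
}

// Hypothetical formatter: render the structured report shown above.
function formatMcpErrorReport(err: McpError): string {
  return [
    "⚠️ **MCP Error Detected**",
    "",
    `**Tool:** ${err.tool}`,
    `**Operation:** ${err.operation}`,
    `**Error Message:** ${err.message}`,
    "",
    "**Context:**",
    ...err.context.map((c) => `- ${c}`),
    "",
    "**Impact:**",
    ...err.impact.map((i) => `- ${i}`),
  ].join("\n");
}
```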
+
+### 3. Ask for User Direction
+
+After reporting, ask the user explicitly what they want to do:
+
+```markdown
+**What would you like me to do next?**
+
+Options:
+1. Retry the operation
+2. Try an alternative approach (if available)
+3. Skip this step and continue with the rest of the task
+4. Stop and troubleshoot the MCP connection
+5. Provide more information about the error
+
+Please let me know how you'd like to proceed.
+```
+
+## Example Error Response
+
+```markdown
+⚠️ **MCP Error Detected**
+
+**Tool:** mcp_figma_get_design_context
+**Operation:** Fetching design context for node 1109:10911
+**Error Message:** "Failed to fetch design data: Authentication token expired"
+
+**Context:**
+- Attempting to retrieve upvote component design from Figma
+- This is needed to extract exact design values for implementation
+- No design data was retrieved
+
+**Impact:**
+- Cannot extract exact spacing, colors, or typography values
+- Cannot proceed with component implementation without design specs
+- Design fidelity cannot be guaranteed
+
+**What would you like me to do next?**
+
+Options:
+1. Retry the operation (may require re-authentication)
+2. Use cached design data if available
+3. Proceed with estimated values (not recommended)
+4. Troubleshoot the Figma MCP connection
+5. Provide the design values manually
+
+Please let me know how you'd like to proceed.
+```
+
+## Common MCP Error Types
+
+### Authentication Errors
+- Token expired
+- Invalid credentials
+- Permission denied
+
+**Response:** Report and ask if user wants to re-authenticate
+
+### Connection Errors
+- Network timeout
+- Service unavailable
+- Rate limit exceeded
+
+**Response:** Report and ask if user wants to retry or wait
+
+### Data Errors
+- Resource not found
+- Invalid node ID
+- Malformed request
+
+**Response:** Report and ask user to verify the input parameters
+
+### Unexpected Results
+- Empty or null response
+- Incomplete data
+- Unexpected format
+
+**Response:** Report what was received vs. what was expected
+
+## What NOT to Do
+
+❌ **Don't** continue working with incomplete or missing data
+❌ **Don't** make assumptions about missing values
+❌ **Don't** retry automatically without user permission
+❌ **Don't** suppress or hide error details
+❌ **Don't** attempt workarounds without user approval
+❌ **Don't** proceed to the next step in a workflow
+
+## Integration with Other Skills
+
+When working with other skills that depend on MCP tools:
+
+- **component-set-testing**: Stop TDD workflow if Figma MCP fails
+- **figma-url-router**: Halt routing if design context cannot be retrieved
+- **figma-component-sync**: Abort sync if either Figma or local data is unavailable
+- **cross-package-types**: Continue only if Prisma MCP operations succeed
+
+## Recovery After Error Resolution
+
+Once the user provides direction:
+
+1. **If retrying:** Acknowledge and execute the retry
+2. **If using alternative:** Explain the alternative approach before proceeding
+3. **If skipping:** Summarize what will be skipped and its impact
+4. **If stopping:** Provide summary of completed vs. incomplete work
+
+## Testing MCP Connections
+
+If troubleshooting is needed, suggest:
+
+```bash
+# Check MCP server status
+# (command varies by MCP type)
+
+# For Figma MCP:
+# - Verify authentication in VS Code
+# - Check Figma desktop app is running
+# - Verify file access permissions
+
+# For Container MCP:
+# - Check Docker daemon is running
+# - Verify container service accessibility
+
+# For Prisma MCP:
+# - Verify database connection
+# - Check schema file location
+```
+
+## Escalation
+
+If errors persist after user-directed retries:
+1. Document the pattern of failures
+2. Suggest alternative approaches that don't require MCP
+3. Recommend checking MCP server logs or configuration
+4. Offer to continue with manual input if applicable
diff --git a/.github/skills/tailwind-utility-simplification/SKILL.md b/.github/skills/tailwind-utility-simplification/SKILL.md
new file mode 100644
index 00000000..6d4cd1d7
--- /dev/null
+++ b/.github/skills/tailwind-utility-simplification/SKILL.md
@@ -0,0 +1,183 @@
+---
+name: tailwind-utility-simplification
+description: Simplify Tailwind CSS utility classes by using standard spacing scale values instead of arbitrary custom values when possible. Use when writing or reviewing Tailwind classes, especially for padding, margin, gap, and width/height utilities.
+---
+
+# Tailwind Utility Simplification
+
+## Purpose
+
+Ensure Tailwind utility classes are accurate to design documents while using standard Tailwind spacing scale values instead of arbitrary custom values when possible. This improves code readability and maintainability.
+
+## When to Use This Skill
+
+- When implementing UI from Figma designs
+- When reviewing code with Tailwind utility classes
+- When refactoring existing components
+- Before committing changes with Tailwind classes
+
+## Core Principles
+
+1. **Always be accurate to the design document** - Verify exact pixel values from Figma using `mcp_figma_get_variable_defs`
+2. **Simplify when possible** - Use standard Tailwind scale values instead of arbitrary values when they match the design exactly
+
+## Tailwind Default Spacing Scale
+
+Tailwind's default spacing scale uses a base of 4px (1 unit = 0.25rem = 4px):
+
+| Class Value | Pixels | When to Use |
+|-------------|--------|-------------|
+| `0` | 0px | Zero spacing |
+| `px` | 1px | 1 pixel borders/spacing |
+| `0.5` | 2px | Very tight spacing |
+| `1` | 4px | Extra small spacing |
+| `1.5` | 6px | Between xs and sm |
+| `2` | 8px | Small spacing |
+| `2.5` | 10px | Between sm and md |
+| `3` | 12px | Medium-small spacing |
+| `3.5` | 14px | Between md-sm and md |
+| `4` | 16px | Medium spacing |
+| `5` | 20px | Medium-large spacing |
+| `6` | 24px | Large spacing |
+| `7` | 28px | Between lg and xl |
+| `8` | 32px | Extra large spacing |
+| `9` | 36px | 2xl spacing |
+| `10` | 40px | 3xl spacing |
+| `11` | 44px | Between 3xl and 4xl |
+| `12` | 48px | 4xl spacing |
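
The scale lookup above can be expressed as a small helper. This is a sketch under the default Tailwind scale; the helper name is illustrative.

```typescript
// Tailwind default spacing scale: pixels -> class token (base 4px = 1 unit).
const SPACING_SCALE: Record<number, string> = {
  0: "0", 1: "px", 2: "0.5", 4: "1", 6: "1.5", 8: "2", 10: "2.5",
  12: "3", 14: "3.5", 16: "4", 20: "5", 24: "6", 28: "7", 32: "8",
  36: "9", 40: "10", 44: "11", 48: "12",
};

// Return the standard token for a pixel value, or an arbitrary-value
// fallback (e.g. "[18px]") when the design value is off-scale.
function spacingToken(px: number): string {
  return SPACING_SCALE[px] ?? `[${px}px]`;
}
```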
+
+## Examples
+
+### ✅ Expected Use of Custom Values
+
+Custom values should be used when the design specifies a value NOT in Tailwind's default scale:
+
+```tsx
+// 5px is not in the standard scale (between 1=4px and 1.5=6px)
+<div className="p-[5px]">
+
+// 18px is not in the standard scale (between 4=16px and 5=20px)
+<Icon className="w-[18px] h-[18px]" />
+
+// 21px is not in the standard scale
+<div className="h-[21px]">
+
+// 13px is not in the standard scale
+<span className="text-[13px]">
+```
+
+### ❌ NOT Expected - Use Standard Classes Instead
+
+```tsx
+// BAD: p-[4px] - Use p-1 instead
+<div className="p-[4px]">
+
+// GOOD: Use standard Tailwind class
+<div className="p-1">
+
+// BAD: p-[16px] - Use p-4 instead
+<div className="p-[16px]">
+
+// GOOD: Use standard Tailwind class
+<div className="p-4">
+
+// BAD: gap-[8px] - Use gap-2 instead
+<div className="gap-[8px]">
+
+// GOOD: Use standard Tailwind class
+<div className="gap-2">
+
+// BAD: py-[4px] - Use py-1 instead
+<div className="py-[4px]">
+
+// GOOD: Use standard Tailwind class
+<div className="py-1">
+
+// BAD: mt-[32px] - Use mt-8 instead
+<div className="mt-[32px]">
+
+// GOOD: Use standard Tailwind class
+<div className="mt-8">
+```
+
+### Mixed Example
+
+```tsx
+// Design specifies: 4px vertical padding, 8px horizontal gap, 18px icon size
+// BEFORE (all arbitrary values):
+<div className="py-[4px] gap-[8px]">
+  <Icon className="w-[18px] h-[18px]" />
+</div>
+
+// AFTER (simplified where possible):
+<div className="py-1 gap-2">
+  <Icon className="w-[18px] h-[18px]" />
+</div>
+```
+
+## Common Utility Types to Check
+
+### Padding/Margin
+- `p-[4px]` β `p-1`
+- `px-[8px]` β `px-2`
+- `py-[16px]` β `py-4`
+- `mt-[24px]` β `mt-6`
+- `mb-[12px]` β `mb-3`
+
+### Gap (Flexbox/Grid)
+- `gap-[4px]` β `gap-1`
+- `gap-[8px]` β `gap-2`
+- `gap-x-[16px]` β `gap-x-4`
+- `gap-y-[20px]` β `gap-y-5`
+
+### Width/Height
+- `w-[16px]` β `w-4`
+- `h-[32px]` β `h-8`
+- Note: Icon sizes often don't match scale (e.g., `w-[18px]` is correct as-is)
+
+### Space Between
+- `space-x-[8px]` β `space-x-2`
+- `space-y-[12px]` β `space-y-3`
+
+## Workflow
+
+When implementing or reviewing Tailwind utilities:
+
+1. **Verify design value** - Use `mcp_figma_get_variable_defs` to get exact pixel values
+2. **Check scale** - Consult the spacing scale table above
+3. **Simplify if match** - Replace arbitrary value with standard class if exact match exists
+4. **Keep if custom** - Use arbitrary value if no standard class matches
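
Steps 2-4 of this workflow could be automated with a rewrite like the following sketch. It is deliberately simplified: it rewrites any `prefix-[Npx]` class, whereas a real tool would restrict itself to spacing utilities (e.g. it should not touch `text-[16px]`).

```typescript
// Subset of the standard scale used for rewriting (pixels -> token).
const PX_TO_TOKEN: Record<string, string> = {
  "4": "1", "8": "2", "12": "3", "16": "4", "20": "5", "24": "6", "32": "8",
};

// Hypothetical checker: replace arbitrary spacing values that have an exact
// standard-scale equivalent (p-[16px] -> p-4); off-scale values are kept.
function simplifyClass(cls: string): string {
  return cls.replace(/(^|\s)([a-z-]+)-\[(\d+)px\]/g, (m, pre, prefix, px) =>
    PX_TO_TOKEN[px] ? `${pre}${prefix}-${PX_TO_TOKEN[px]}` : m
  );
}
```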
+
+## Common Mistakes to Avoid
+
+❌ **Don't** simplify values that don't exactly match the scale
+```tsx
+// If design specifies 5px, keep it custom:
+<div className="p-1.5">   // WRONG: 1.5 = 6px, not 5px
+<div className="p-[5px]"> // CORRECT
+```
+
+❌ **Don't** forget to check all dimensions
+```tsx
+// Check both width AND height:
+<Icon className="w-[18px] h-4" /> // Inconsistent - both should match design
+```
+
+❌ **Don't** change values without verifying against design
+```tsx
+// Always verify the design first - don't assume!
+```
+
+## Quick Reference: Most Common Conversions
+
+| Arbitrary Value | Standard Class | Pixels |
+|-----------------|----------------|--------|
+| `[4px]` | `1` | 4px |
+| `[8px]` | `2` | 8px |
+| `[12px]` | `3` | 12px |
+| `[16px]` | `4` | 16px |
+| `[20px]` | `5` | 20px |
+| `[24px]` | `6` | 24px |
+| `[32px]` | `8` | 32px |
+
+Remember: When in doubt, keep the arbitrary value. Accuracy to the design is more important than simplification.
diff --git a/specs/005-comment-upvotes/spec.md b/specs/005-comment-upvotes/spec.md
new file mode 100644
index 00000000..2023e6fd
--- /dev/null
+++ b/specs/005-comment-upvotes/spec.md
@@ -0,0 +1,14 @@
+As a user, I want the ability to upvote a comment. This is done by clicking on the upvote icon.
+
+As a user, I want to see which comments have been upvoted. I can tell which comments I've upvoted because the icon is highlighted based on my choice.
+
+As a user, I want to see a list of users that have upvoted a comment by hovering over the related icon.
+
+As a user, when there are upvotes, a number should display next to the icon. If there aren't any upvotes, no number should show, just the icon.
+
+All the requirements for upvoting should exist for downvoting as well.
+
+Designs:
+
+- Feature https://www.figma.com/design/7QW0kJ07DcM36mgQUJ5Dtj/Carton-Case-Management?node-id=1099-7993, https://www.figma.com/design/7QW0kJ07DcM36mgQUJ5Dtj/Carton-Case-Management?node-id=1109-10911&t=9J5VsTJQG0uMQoMN-0
+- Component Set https://www.figma.com/design/7QW0kJ07DcM36mgQUJ5Dtj/Carton-Case-Management?node-id=1109-10682&t=9J5VsTJQG0uMQoMN-4