Instance 5 Documentation
Project: Pronghorn - AI-Powered Enterprise Application Platform
Focus: Core feature modules (Admin, Audit, Build, BuildBook, Canvas, Collaboration, Dashboard)
- Feature Modules Overview
- Admin Module
- Audit Module
- Build Module
- BuildBook Module
- Canvas Module
- Collaboration Module
- Dashboard Module
- Cross-Module Integration Patterns
Pronghorn organizes features into domain-specific modules aligned with the DESIGN → AUDIT → BUILD workflow:
Project Lifecycle:
DESIGN: Canvas, Requirements, Standards, Tech Stacks
↓
AUDIT: Audit orchestrator, Venn diagrams, Knowledge graph
↓
BUILD: Coding agent, Repository operations, Deployment
src/components/<module>/
├── <Module>Dialog.tsx # Modal for configuration
├── <Module>Viewer.tsx # Main display component
├── <Feature>Panel.tsx # Side panels
├── <Feature>Card.tsx # List item components
└── hooks/use<Module>.ts # Custom hooks (if complex)
Location: src/components/admin/
Components: 1
Administrative controls for project publishing and super-admin features.
Purpose: Publish project to organization gallery or public catalog
Key Features:
- Organization selector (for multi-org users)
- Visibility settings (private, org-only, public)
- Category tagging
- SEO metadata (title, description, keywords)
- Splash image upload
Props:
interface PublishProjectDialogProps {
projectId: string;
shareToken: string | null;
open: boolean;
onOpenChange: (open: boolean) => void;
}

Workflow:
- User clicks "Publish" from project menu
- Dialog opens with current settings
- User selects organization + visibility
- Tags added for discoverability
- Calls `update_project_publish_settings` RPC
- Project appears in org gallery
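A sketch of assembling the RPC arguments from the dialog state. All `p_*` parameter names and the form field names are illustrative assumptions; only the `update_project_publish_settings` RPC name comes from the workflow above.

```typescript
// Hypothetical shape of the publish form state; field names are
// illustrative, not confirmed by the Pronghorn schema.
interface PublishFormState {
  organizationId: string | null;
  visibility: "private" | "org-only" | "public";
  categories: string[];
  seoTitle: string;
  seoDescription: string;
}

// Builds the arguments object for the update_project_publish_settings RPC,
// trimming whitespace and dropping empty category tags.
function buildPublishPayload(projectId: string, form: PublishFormState) {
  return {
    p_project_id: projectId,
    p_organization_id: form.organizationId,
    p_visibility: form.visibility,
    p_categories: form.categories.map((c) => c.trim()).filter(Boolean),
    p_seo_title: form.seoTitle.trim(),
    p_seo_description: form.seoDescription.trim(),
  };
}
```

The payload would then be passed to `supabase.rpc('update_project_publish_settings', payload)`.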
Admin Guard:
const { isAdmin } = useAdmin();
if (!isAdmin) {
return <div>Access denied</div>;
}

Location: src/components/audit/
Components: 11
Multi-agent audit orchestration for requirements coverage analysis. Compares Dataset 1 (source of truth) against Dataset 2 (implementation) using AI agents.
┌─────────────────────────────────────────┐
│ AuditConfigurationDialog │ ← User starts audit
│ (Select D1 vs D2, settings) │
└─────────────────┬───────────────────────┘
↓
┌─────────────────────────────────────────┐
│ Audit Orchestrator (Edge Function) │ ← Multi-agent coordination
│ - Read datasets in batches │
│ - Create concepts (themes) │
│ - Build knowledge graph │
│ - Generate tesseract (coverage matrix)│
│ - Produce Venn diagram │
└─────────────────┬───────────────────────┘
↓
Real-time Updates
↓
┌─────────────────────────────────────────┐
│ AuditActivityStream │ ← Live progress feed
│ AuditBlackboard │ ← Agent shared memory
│ KnowledgeGraph │ ← Force-directed graph
│ TesseractVisualizer │ ← 3D coverage matrix
│ VennDiagramResults │ ← Final gaps/coverage
└─────────────────────────────────────────┘
Purpose: Configure and launch audit session
Key Configuration Options:
| Setting | Type | Options | Purpose |
|---|---|---|---|
| `auditMode` | Enum | `comparison`, `single` | — |
| `dataset1Content` | ProjectSelectionResult | Mixed selection | Requirements, artifacts, standards, files |
| `dataset2Content` | ProjectSelectionResult | Mixed selection | Canvas nodes, files, databases |
| `consolidationLevel` | Enum | `low`, `medium`, `high` | — |
| `chunkSize` | Enum | `small`, `medium`, … | — |
| `batchSize` | Enum | `1`, `5`, `10` | — |
| `mappingMode` | Enum | `one_to_one`, `one_to_many` | — |
| `maxConceptsPerElement` | Number | 1-50 | Max concepts per D1 element (1:many mode) |
| `enhancedSortEnabled` | Boolean | — | Enable AI-powered result sorting |
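These settings can be captured in a typed config object. A sketch under assumptions: `chunkSize` is typed loosely because its full option list is cut off in the table, and the clamp range for `maxConceptsPerElement` follows the documented 1-50 bound.

```typescript
// Enumerations mirroring the settings table above.
type AuditMode = "comparison" | "single";
type ConsolidationLevel = "low" | "medium" | "high";
type MappingMode = "one_to_one" | "one_to_many";

interface AuditConfig {
  auditMode: AuditMode;
  consolidationLevel: ConsolidationLevel;
  chunkSize: string; // full option list not shown in the table
  batchSize: "1" | "5" | "10";
  mappingMode: MappingMode;
  maxConceptsPerElement: number; // 1-50, only used in one_to_many mode
  enhancedSortEnabled: boolean;
}

// Clamp maxConceptsPerElement into its documented 1-50 range before launch.
function normalizeConfig(config: AuditConfig): AuditConfig {
  return {
    ...config,
    maxConceptsPerElement: Math.min(50, Math.max(1, config.maxConceptsPerElement)),
  };
}
```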
ProjectSelectionResult Structure:
interface ProjectSelectionResult {
projectMetadata: { name, description, ... } | null;
artifacts: Array<{ id, content, ai_title }>;
chatSessions: Array<{ id, ai_summary }>;
requirements: Array<{ id, title, description }>;
standards: Array<{ id, code, title }>;
techStacks: Array<{ id, name, type }>;
canvasNodes: Array<{ id, data, type }>;
canvasEdges: Array<{ id, source, target }>;
files: Array<{ id, path, content }>;
databases: Array<{ type, name, columns, sampleData }>;
}

Selection UI: Uses <ProjectSelector> component (see DOCS-06)
Workflow:
// 1. User opens dialog
<Button onClick={() => setOpen(true)}>Start Audit</Button>
// 2. Select Dataset 1 (source of truth)
<ProjectSelector
projectId={projectId}
shareToken={shareToken}
multiSelect={true}
allowedCategories={["requirements", "standards", "artifacts"]}
onSelect={(selection) => setDataset1Content(selection)}
/>
// 3. Select Dataset 2 (implementation)
<ProjectSelector
allowedCategories={["files", "canvasNodes", "databases"]}
onSelect={(selection) => setDataset2Content(selection)}
/>
// 4. Configure processing settings
<Select value={consolidationLevel} onValueChange={...}>
<SelectItem value="low">Low (preserve detail)</SelectItem>
<SelectItem value="medium">Medium (balanced)</SelectItem>
<SelectItem value="high">High (aggressive merging)</SelectItem>
</Select>
// 5. Launch audit
onStartAudit({
name: "Requirements vs Implementation Audit",
auditMode: "comparison",
dataset1Content,
dataset2Content,
consolidationLevel: "medium",
chunkSize: "medium",
batchSize: "10",
mappingMode: "one_to_many",
maxConceptsPerElement: 10,
});

Backend Call:
const { data: session } = await supabase.rpc('create_audit_session_with_token', {
p_project_id: projectId,
p_token: shareToken,
p_name: config.name,
p_dataset_1_content: config.dataset1Content, // Full JSON
p_dataset_2_content: config.dataset2Content,
p_max_iterations: 100,
});
// Trigger orchestrator
const response = await fetch('/functions/v1/audit-orchestrator', {
method: 'POST',
body: JSON.stringify({
sessionId: session.id,
projectId,
shareToken,
}),
});

Purpose: Real-time activity feed showing orchestrator progress
Features:
- Live updates via Supabase real-time channel
- Phase transitions (graph_building → gap_analysis → deep_analysis → synthesis)
- Tool calls logged (read_dataset_item, create_concept, link_concepts, etc.)
- Agent thinking/reasoning display
- Error handling and retry status
Real-Time Subscription:
const channel = supabase
.channel(`audit-${sessionId}`)
.on('broadcast', { event: 'audit_refresh' }, (payload) => {
if (payload.event === 'iteration') {
setCurrentIteration(payload.iteration);
setCurrentPhase(payload.phase);
} else if (payload.event === 'phase_change') {
toast.info(`Phase: ${payload.phaseDisplayName}`);
} else if (payload.event === 'complete') {
setStatus('completed');
}
})
.subscribe();

Activity Types:
- thinking - Agent reasoning
- tool_call - Tool execution attempt
- success - Tool succeeded
- error - Tool failed
- phase_change - Workflow phase transition
- llm_request - AI model request
- llm_response - AI model response
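These types can be modeled as a union, with helpers deciding which feed entries get an expandable detail section. This is a sketch; the real row shape may differ, and treating errors as initially expanded is an assumption, not documented behavior.

```typescript
type ActivityType =
  | "thinking" | "tool_call" | "success" | "error"
  | "phase_change" | "llm_request" | "llm_response";

interface AuditActivity {
  id: string;
  activity_type: ActivityType;
  title: string;
  content?: string; // expandable detail payload, when present
  created_at: string;
}

// Entries with a content payload render inside a Collapsible.
function isExpandable(activity: AuditActivity): boolean {
  return Boolean(activity.content);
}

// Assumption: errors open expanded so failures are visible without a click.
function isInitiallyOpen(activity: AuditActivity): boolean {
  return activity.activity_type === "error" && isExpandable(activity);
}
```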
UI Pattern:
<ScrollArea className="h-[600px]">
{activities.map((activity) => (
<ActivityItem key={activity.id}>
<ActivityIcon type={activity.activity_type} />
<ActivityTitle>{activity.title}</ActivityTitle>
<ActivityTimestamp>{formatDistanceToNow(activity.created_at)}</ActivityTimestamp>
{activity.content && (
<Collapsible>
<CollapsibleTrigger>Show details</CollapsibleTrigger>
<CollapsibleContent>
<pre>{activity.content}</pre>
</CollapsibleContent>
</Collapsible>
)}
</ActivityItem>
))}
</ScrollArea>

Purpose: Display shared agent memory (blackboard pattern)
What is the Blackboard?
Multi-agent coordination pattern where agents write observations/findings to a shared memory space:
Agent 1 writes: "Found 5 authentication requirements in D1"
Agent 2 reads blackboard, writes: "D2 has login.ts and session.ts files"
Agent 3 synthesizes: "Authentication partially implemented (60% coverage)"
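The coordination above can be sketched as an append-only shared store that any agent can write to and query. This is a simplified in-memory version; in Pronghorn the blackboard entries are persisted and read by the AuditBlackboard component.

```typescript
type EntryType = "plan" | "finding" | "observation" | "question" | "conclusion" | "tool_result";

interface BlackboardEntry {
  agentRole: string;
  entryType: EntryType;
  content: string;
  iteration: number;
}

// Append-only shared memory: agents post entries and read each other's
// findings; nothing is ever mutated or deleted.
class Blackboard {
  private entries: BlackboardEntry[] = [];

  write(entry: BlackboardEntry): void {
    this.entries.push(entry);
  }

  // All entries of one type, in write order.
  read(entryType: EntryType): BlackboardEntry[] {
    return this.entries.filter((e) => e.entryType === entryType);
  }
}
```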
Entry Types:
- plan - High-level strategy
- finding - Discovery/observation
- observation - Pattern noticed
- question - Unresolved issue
- conclusion - Final determination
- tool_result - Tool execution output
UI Features:
// Filter by entry type
<Tabs>
<TabsList>
<TabsTrigger value="all">All ({entries.length})</TabsTrigger>
<TabsTrigger value="finding">Findings ({findings.length})</TabsTrigger>
<TabsTrigger value="conclusion">Conclusions</TabsTrigger>
</TabsList>
</Tabs>
// Entry display with metadata
<Card>
<Badge>{entry.entry_type}</Badge>
<div className="text-sm text-muted-foreground">
{entry.agent_role} • Iteration {entry.iteration}
</div>
<ReactMarkdown>{entry.content}</ReactMarkdown>
{entry.confidence && (
<ProgressBar value={entry.confidence * 100} />
)}
</Card>

Purpose: Visualize concept relationships as force-directed graph
Two Implementations:
| Component | Renderer | Use Case | Max Nodes |
|---|---|---|---|
| KnowledgeGraph | react-force-graph-2d | Interactive exploration | ~500 |
| KnowledgeGraphWebGL | WebGL custom | High performance | ~10,000 |
Graph Structure:
Node Types:
- d1_element - Dataset 1 items (requirements, standards)
- d2_element - Dataset 2 items (files, canvas nodes)
- concept - Extracted themes/patterns
- cluster - Merged concept groups
Edge Types:
- relates_to - Conceptual relationship
- implements - D2 implements D1
- depends_on - Dependency
- conflicts_with - Contradiction
- supports - Supporting evidence
- covers - Coverage relationship
- derived_from - Concept derived from element
Example Graph:
Concept: "Authentication"
├─ derived_from → Requirement: "User Login" (D1)
├─ derived_from → Requirement: "Session Management" (D1)
├─ implemented_by → File: "auth/login.ts" (D2)
└─ implemented_by → File: "auth/session.ts" (D2)
Concept: "Password Reset"
├─ derived_from → Requirement: "Forgot Password" (D1)
└─ (NO D2 EDGES) → GAP IDENTIFIED
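The gap in the example falls out of a simple graph query: any concept node with no edge touching a D2 element is unimplemented. A sketch over the node and edge shapes used in this section:

```typescript
interface GraphNode {
  id: string;
  type: "d1_element" | "d2_element" | "concept" | "cluster";
}

interface GraphEdge {
  source: string;
  target: string;
  type: string; // e.g. "derived_from", "implements"
}

// A concept is a gap when none of its edges connect it to a d2_element node,
// regardless of edge direction.
function findGapConcepts(nodes: GraphNode[], edges: GraphEdge[]): GraphNode[] {
  const d2Ids = new Set(nodes.filter((n) => n.type === "d2_element").map((n) => n.id));
  return nodes.filter((node) => {
    if (node.type !== "concept") return false;
    return !edges.some(
      (e) =>
        (e.source === node.id && d2Ids.has(e.target)) ||
        (e.target === node.id && d2Ids.has(e.source))
    );
  });
}
```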
Interactive Features:
// Click node to highlight connected nodes
onNodeClick={(node) => {
const connectedNodeIds = edges
.filter(e => e.source === node.id || e.target === node.id)
.flatMap(e => [e.source, e.target]);
setHighlightedNodes(connectedNodeIds);
}}
// Zoom to fit
<Button onClick={() => graphRef.current?.zoomToFit(400)}>
Fit to View
</Button>
// Filter by dataset
<Tabs>
<TabsTrigger onClick={() => setFilter('d1_only')}>D1 Only</TabsTrigger>
<TabsTrigger onClick={() => setFilter('d2_only')}>D2 Only</TabsTrigger>
<TabsTrigger onClick={() => setFilter('concepts')}>Concepts</TabsTrigger>
</Tabs>

Force Simulation Parameters:
forceEngine="d3"
d3AlphaDecay={0.02}
d3VelocityDecay={0.3}
cooldownTicks={100}
nodeRelSize={6}
linkDirectionalArrowLength={3.5}

Purpose: 3D coverage quality matrix (Tesseract)
What is a Tesseract?
A 4D data structure flattened to 3D for visualization:
Dimensions:
X-axis: D1 Elements (requirements, standards)
Y-axis: Analysis Steps (1-5)
Z-axis: Polarity (coverage quality: -1 to +1)
Color: Criticality (critical, major, minor, info)
Analysis Steps (Y-axis):
- Identify: Does D2 contain corresponding elements?
- Complete: Is implementation complete?
- Correct: Is implementation accurate?
- Quality: Is quality acceptable?
- Integrate: Does it integrate properly?
Cell Data Structure:
interface TesseractCell {
x_element_id: string; // D1 element UUID
x_element_label: string; // "User Login"
x_index: number; // Position in D1 list (0-based)
y_step: number; // Analysis step (1-5)
y_step_label: string; // "Identify"
z_polarity: number; // -1 (gap) to +1 (perfect)
z_criticality: 'critical' | 'major' | 'minor' | 'info';
evidence_summary: string; // "Found login.ts, missing 2FA"
contributing_agents: string[]; // ["orchestrator", "security_agent"]
}

Visualization:
// 3D scatter plot using recharts or custom WebGL
<ScatterChart>
<XAxis type="number" dataKey="x_index" name="Requirement" />
<YAxis type="number" dataKey="y_step" name="Step" />
<ZAxis type="number" dataKey="z_polarity" name="Quality" />
<Scatter data={cells}>
{/* Recharts applies per-point colors via Cell children, not a fill callback */}
{cells.map((cell, i) => (
<Cell key={i} fill={getCriticalityColor(cell.z_criticality)} />
))}
</Scatter>
</ScatterChart>
// Criticality color mapping
function getCriticalityColor(criticality: string) {
switch(criticality) {
case 'critical': return '#ef4444'; // red-500
case 'major': return '#f97316'; // orange-500
case 'minor': return '#eab308'; // yellow-500
case 'info': return '#3b82f6'; // blue-500
default: return '#6b7280'; // gray-500 fallback for unknown values
}
}

Interaction:
- Hover cell → show tooltip with evidence
- Click cell → jump to D1 element + D2 file
- Filter by criticality
- Slice by analysis step
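Summary scores for filtering and slicing can be derived directly from the cells; for example, mean polarity per D1 element across the five analysis steps. A sketch over a subset of the `TesseractCell` fields:

```typescript
// Only the fields the aggregation needs from TesseractCell.
interface CellScore {
  x_element_id: string;
  z_polarity: number; // -1 (gap) to +1 (perfect)
}

// Mean polarity per D1 element across all analysis steps; an element
// averaging below zero is more gap than coverage.
function elementScores(cells: CellScore[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const cell of cells) {
    const agg = sums.get(cell.x_element_id) ?? { total: 0, count: 0 };
    agg.total += cell.z_polarity;
    agg.count += 1;
    sums.set(cell.x_element_id, agg);
  }
  const means = new Map<string, number>();
  for (const [id, { total, count }] of sums) {
    means.set(id, total / count);
  }
  return means;
}
```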
Purpose: Final audit results visualization (3-circle Venn diagram)
Venn Structure:
┌─────────────────────────────────────────┐
│ │
│ ┌─────────────┐ │
│ │ D1 Only │ ← Gaps (unimplemented)
│ │ (Gaps) │ │
│ └──────┬──────┘ │
│ │ │
│ ┌────┴────┐ │
│ │ Aligned │ ← Coverage (matched) │
│ │(Coverage)│ │
│ └────┬────┘ │
│ │ │
│ ┌──────┴──────┐ │
│ │ D2 Only │ ← Orphans (extra impl)
│ │ (Orphans) │ │
│ └─────────────┘ │
└─────────────────────────────────────────┘
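Mechanically, the three regions are a set partition over the alignment pairs the orchestrator produced. A minimal sketch (the real result also carries criticality and evidence per entry):

```typescript
interface AlignmentPair {
  d1Id: string;
  d2Id: string;
}

// Partition D1/D2 element ids into the three Venn regions:
// gaps (D1 only), orphans (D2 only), and the aligned intersection.
function partitionVenn(d1Ids: string[], d2Ids: string[], aligned: AlignmentPair[]) {
  const alignedD1 = new Set(aligned.map((p) => p.d1Id));
  const alignedD2 = new Set(aligned.map((p) => p.d2Id));
  return {
    gaps: d1Ids.filter((id) => !alignedD1.has(id)),
    orphans: d2Ids.filter((id) => !alignedD2.has(id)),
    aligned,
  };
}
```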
Result Data:
interface VennDiagramResult {
unique_to_d1: Array<{ // GAPS
id: string;
label: string;
category: string;
criticality: 'critical' | 'major' | 'minor' | 'info';
evidence: string;
polarity: number; // Always negative for gaps
}>;
aligned: Array<{ // COVERAGE
id: string;
label: string;
d1_element: string;
d2_element: string;
polarity: number; // 0 to 1 (quality score)
}>;
unique_to_d2: Array<{ // ORPHANS
id: string;
label: string;
category: string;
evidence: string; // Why no D1 match
}>;
summary: {
total_d1_coverage: number; // % of D1 covered
total_d2_coverage: number; // % of D2 linked
alignment_score: number; // Overall quality (0-100)
gaps: number; // Count
orphans: number; // Count
aligned: number; // Count
};
}

UI Features:
// Summary cards
<div className="grid grid-cols-3 gap-4">
<Card>
<CardTitle>Coverage</CardTitle>
<div className="text-4xl font-bold">{summary.total_d1_coverage}%</div>
<Progress value={summary.total_d1_coverage} />
</Card>
<Card className="border-red-500">
<CardTitle>Gaps</CardTitle>
<div className="text-4xl font-bold text-red-500">{summary.gaps}</div>
<p className="text-sm">Critical: {criticalGaps.length}</p>
</Card>
<Card className="border-yellow-500">
<CardTitle>Orphans</CardTitle>
<div className="text-4xl font-bold text-yellow-500">{summary.orphans}</div>
</Card>
</div>
// Detailed lists with filtering
<Tabs>
<TabsList>
<TabsTrigger value="gaps">
Gaps ({unique_to_d1.length})
{criticalGaps.length > 0 && (
<Badge variant="destructive">{criticalGaps.length} critical</Badge>
)}
</TabsTrigger>
<TabsTrigger value="aligned">Aligned ({aligned.length})</TabsTrigger>
<TabsTrigger value="orphans">Orphans ({unique_to_d2.length})</TabsTrigger>
</TabsList>
<TabsContent value="gaps">
<FindingsTable
findings={unique_to_d1}
sortBy="criticality"
onRowClick={(gap) => navigateToRequirement(gap.id)}
/>
</TabsContent>
</Tabs>

Purpose: Sortable, filterable table for audit findings
Columns:
- Severity Badge (critical/major/minor/info)
- Element Label
- Category (requirement, standard, file, etc.)
- Evidence/Description
- Polarity Score (-1 to +1)
- Actions (View details, Link to source)
Features:
- Multi-column sorting
- Search filter
- Criticality filter
- Export to CSV/Excel
- Pagination
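Sorting by criticality needs an explicit severity rank, since alphabetical order would interleave the levels. A comparator sketch (the polarity tiebreak, worst first, is an assumption):

```typescript
type Criticality = "critical" | "major" | "minor" | "info";

// Lower rank sorts first, so critical findings surface at the top.
const CRITICALITY_RANK: Record<Criticality, number> = {
  critical: 0,
  major: 1,
  minor: 2,
  info: 3,
};

interface Finding {
  label: string;
  criticality: Criticality;
  polarity: number; // -1 to +1
}

// Sort by severity, then by polarity (most negative, i.e. worst, first).
function sortFindings(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      CRITICALITY_RANK[a.criticality] - CRITICALITY_RANK[b.criticality] ||
      a.polarity - b.polarity
  );
}
```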
Purpose: Visual progress indicators for audit results
Chart Types:
- Donut Chart: Coverage % (aligned vs gaps)
- Bar Chart: Coverage by category (auth, data, UI, etc.)
- Radar Chart: Quality dimensions (completeness, correctness, integration)
- Timeline: Coverage trend over iterations
Recharts Integration:
<ResponsiveContainer width="100%" height={300}>
<PieChart>
<Pie
data={[
{ name: 'Covered', value: aligned.length, fill: '#22c55e' },
{ name: 'Gaps', value: gaps.length, fill: '#ef4444' },
{ name: 'Orphans', value: orphans.length, fill: '#eab308' },
]}
dataKey="value"
nameKey="name"
cx="50%"
cy="50%"
innerRadius={60}
outerRadius={80}
label
/>
<Tooltip />
</PieChart>
</ResponsiveContainer>

Data Flow:
AuditConfigurationDialog
↓ (user submits)
create_audit_session_with_token() RPC
↓
audit-orchestrator Edge Function
↓ (broadcasts events)
Supabase Realtime Channel
↓ (subscriptions)
├─ AuditActivityStream (updates)
├─ AuditBlackboard (reads entries)
├─ KnowledgeGraph (queries nodes/edges)
├─ TesseractVisualizer (queries cells)
└─ VennDiagramResults (queries venn_result)
Location: src/components/build/
Components: 12
Autonomous AI coding agent for repository operations (read, edit, create, delete files).
┌──────────────────────────────────────────┐
│ UnifiedAgentInterface │ ← Main UI
│ - Chat input │
│ - Attached files selector │
│ - Context attachment (ProjectSelector) │
│ - Auto-commit toggle │
└──────────────────┬───────────────────────┘
↓
┌──────────────────────────────────────────┐
│ coding-agent-orchestrator │ ← Edge Function
│ - Parse task description │
│ - Generate operation plan │
│ - Execute file operations │
│ - Stream progress │
└──────────────────┬───────────────────────┘
↓
┌──────────────────────────────────────────┐
│ Real-time UI Updates │
│ - AgentChatViewer (messages) │
│ - AgentProgressMonitor (status) │
│ - AgentFileTree (changed files) │
│ - DiffViewer (before/after) │
│ - StagingPanel (git staging area) │
└──────────────────────────────────────────┘
Purpose: Single-pane coding agent chat interface
Key Features:
a) Chat Interface:
<Textarea
placeholder="Describe what you want to build..."
value={taskInput}
onChange={(e) => setTaskInput(e.target.value)}
onKeyDown={(e) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault(); // stop Enter from also inserting a newline
handleSubmit();
}
}}
/>
<Button onClick={handleSubmit} disabled={isSubmitting}>
{isSubmitting ? <Loader2 className="animate-spin" /> : <Send />}
Send
</Button>

b) File Attachment:
<AgentFileTree
files={files}
attachedFiles={attachedFiles}
onToggleAttach={(fileId) => {
if (attachedFiles.some(f => f.id === fileId)) {
onRemoveFile(fileId);
} else {
setAttachedFiles([...attachedFiles, files.find(f => f.id === fileId)]);
}
}}
/>
// Attached files chip display
{attachedFiles.map(file => (
<Badge key={file.id}>
<FileText className="w-3 h-3 mr-1" />
{file.path}
<X className="w-3 h-3 ml-1 cursor-pointer" onClick={() => onRemoveFile(file.id)} />
</Badge>
))}

c) Context Attachment (Full Project Context):
<Button onClick={() => setIsProjectSelectorOpen(true)}>
<Paperclip className="w-4 h-4 mr-2" />
Attach Context {attachedContext && `(${countItems(attachedContext)} items)`}
</Button>
<ProjectSelector
projectId={projectId}
shareToken={shareToken}
multiSelect={true}
allowedCategories={["requirements", "standards", "artifacts", "files", "databases"]}
onSelect={(selection) => setAttachedContext(selection)}
/>

d) Chat History Settings:
<Collapsible>
<CollapsibleTrigger>Chat History Settings</CollapsibleTrigger>
<CollapsibleContent>
<Switch
checked={chatHistorySettings.includeHistory}
onCheckedChange={(checked) =>
setChatHistorySettings({...chatHistorySettings, includeHistory: checked})
}
/>
<Label>Include chat history in context</Label>
<RadioGroup value={chatHistorySettings.durationType}>
<RadioGroupItem value="time">Last N minutes</RadioGroupItem>
<RadioGroupItem value="messages">Last N messages</RadioGroupItem>
</RadioGroup>
<Input
type="number"
value={chatHistorySettings.durationValue}
onChange={(e) =>
setChatHistorySettings({...chatHistorySettings, durationValue: parseInt(e.target.value)})
}
/>
<Select value={chatHistorySettings.verbosity}>
<SelectItem value="minimal">Minimal (user messages only)</SelectItem>
<SelectItem value="standard">Standard (user + assistant summaries)</SelectItem>
<SelectItem value="detailed">Detailed (full conversation)</SelectItem>
</Select>
</CollapsibleContent>
</Collapsible>

e) Auto-Commit Toggle:
<Checkbox
checked={autoCommit}
onCheckedChange={onAutoCommitChange}
/>
<Label>Auto-commit after operations</Label>

Message Submission:
const handleSubmit = async () => {
// Optimistic UI update
const userMessage = {
role: 'user',
content: taskInput,
created_at: new Date().toISOString(),
};
setMessages([...messages, userMessage]);
setTaskInput('');
setIsSubmitting(true);
try {
// Call coding agent
const response = await fetch('/functions/v1/coding-agent-orchestrator', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
projectId,
repoId,
shareToken,
taskDescription: taskInput,
attachedFiles: attachedFiles.map(f => ({ id: f.id, path: f.path })),
projectContext: attachedContext,
mode: 'task',
autoCommit,
chatHistory: buildChatHistory(messages, chatHistorySettings),
}),
});
const result = await response.json();
// Add agent response
setMessages([...messages, userMessage, {
role: 'assistant',
content: result.reasoning || result.summary,
created_at: new Date().toISOString(),
metadata: {
operations: result.operations,
filesChanged: result.operations.filter(op => op.type === 'edit_lines').length,
}
}]);
// Refresh file tree
refetchFiles();
} catch (error) {
toast.error('Agent failed: ' + error.message);
} finally {
setIsSubmitting(false);
}
};

Purpose: Display agent conversation with operation badges
Message Rendering:
{messages.map((msg) => (
<div className={`flex ${msg.role === 'user' ? 'justify-end' : 'justify-start'}`}>
<Card className={msg.role === 'user' ? 'bg-primary' : 'bg-muted'}>
<div className="flex items-start gap-2">
{msg.role === 'user' ? <User /> : <Bot />}
<div>
<ReactMarkdown remarkPlugins={[remarkGfm]}>
{msg.content}
</ReactMarkdown>
{/* Operation badges */}
{msg.metadata?.operations && (
<div className="flex flex-wrap gap-1 mt-2">
{msg.metadata.operations.map((op, i) => (
<Badge key={i} variant={getOperationVariant(op.type)}>
{getOperationIcon(op.type)}
{op.type}
</Badge>
))}
</div>
)}
</div>
</div>
</Card>
</div>
))}

Operation Type Mapping:
function getOperationIcon(type: string) {
switch(type) {
case 'read_file': return <FileText />;
case 'edit_lines': return <FileEdit />;
case 'create_file': return <FilePlus />;
case 'delete_file': return <FileX />;
case 'search': return <FolderSearch />;
default: return <Wrench />;
}
}

Purpose: Real-time status of agent operations
Status Types:
- idle - Waiting for task
- thinking - Planning operations
- executing - Running file operations
- committing - Git commit in progress
- completed - Task finished
- error - Failed
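The `statusLabels` map and progress value the component uses are not defined in this excerpt; a plausible sketch (label wording is illustrative, taken from the status descriptions above):

```typescript
type AgentStatus = "idle" | "thinking" | "executing" | "committing" | "completed" | "error";

// Human-readable labels for each status; wording is illustrative.
const statusLabels: Record<AgentStatus, string> = {
  idle: "Waiting for task",
  thinking: "Planning operations",
  executing: "Running file operations",
  committing: "Committing changes",
  completed: "Task finished",
  error: "Failed",
};

// Percentage for the progress bar; guards the zero-operations case.
function progressPercent(completedOps: number, totalOps: number): number {
  if (totalOps <= 0) return 0;
  return Math.round((completedOps / totalOps) * 100);
}
```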
UI:
<Card>
<div className="flex items-center gap-2">
{status === 'executing' && <Loader2 className="animate-spin" />}
{status === 'completed' && <CheckCircle className="text-green-500" />}
{status === 'error' && <XCircle className="text-red-500" />}
<span className="font-semibold">{statusLabels[status]}</span>
</div>
{currentOperation && (
<div className="text-sm text-muted-foreground mt-2">
{currentOperation.type}: {currentOperation.params.path || currentOperation.params.file_id}
</div>
)}
<Progress value={progress} className="mt-2" />
<div className="text-xs text-muted-foreground">
{completedOps}/{totalOps} operations
</div>
</Card>

Purpose: File browser with change indicators
Features:
- Hierarchical file tree
- Change badges (modified, created, deleted)
- Checkbox for attachment
- Staging status indicator
- Search/filter
File Item Rendering:
<TreeItem
nodeId={file.id}
label={
<div className="flex items-center justify-between w-full">
<div className="flex items-center gap-2">
<Checkbox
checked={isAttached}
onCheckedChange={() => onToggleAttach(file.id)}
/>
<FileIcon extension={getExtension(file.path)} />
<span>{file.path}</span>
</div>
<div className="flex gap-1">
{file.isModified && <Badge variant="warning">M</Badge>}
{file.isCreated && <Badge variant="success">A</Badge>}
{file.isStaged && <GitCommit className="w-4 h-4 text-primary" />}
</div>
</div>
}
/>

Purpose: Side-by-side file diff viewer
Features:
- Syntax highlighting via Monaco Editor
- Inline diff mode
- Side-by-side diff mode
- Line number mapping
- Change navigation (prev/next change)
Implementation:
import { DiffEditor } from '@monaco-editor/react';
<DiffEditor
original={originalContent}
modified={modifiedContent}
language={detectLanguage(filePath)}
theme="vs-dark"
options={{
renderSideBySide: true,
readOnly: true,
minimap: { enabled: false },
scrollBeyondLastLine: false,
}}
/>

Alternative: Using diff library for text-based diff:
import * as Diff from 'diff';
const changes = Diff.diffLines(original, modified);
// Track the displayed line number; jsdiff's diffLines provides the line
// count of each change block via `count`.
let lineNumber = 1;
<div>
{changes.map((change, i) => {
const startLine = lineNumber;
if (!change.removed) lineNumber += change.count ?? 0;
return (
<div
key={i}
className={cn(
change.added && 'bg-green-500/20',
change.removed && 'bg-red-500/20'
)}
>
<span className="text-muted-foreground">{startLine}</span>
<pre>{change.value}</pre>
</div>
);
})}
</div>

Purpose: Git staging area management
Features:
- List staged files
- Stage/unstage individual files
- Stage all changes
- Discard changes
- Commit with message
UI:
<Tabs>
<TabsList>
<TabsTrigger value="staged">Staged ({stagedFiles.length})</TabsTrigger>
<TabsTrigger value="unstaged">Changes ({unstagedFiles.length})</TabsTrigger>
</TabsList>
<TabsContent value="staged">
{stagedFiles.map(file => (
<div className="flex items-center justify-between">
<span>{file.path}</span>
<Button size="sm" variant="ghost" onClick={() => unstageFile(file.id)}>
Unstage
</Button>
</div>
))}
<Button onClick={openCommitDialog}>
<GitCommit className="mr-2" />
Commit ({stagedFiles.length} files)
</Button>
</TabsContent>
<TabsContent value="unstaged">
{unstagedFiles.map(file => (
<div>
<Checkbox
checked={false}
onCheckedChange={() => stageFile(file.id)}
/>
<span>{file.path}</span>
<Badge>{file.change_type}</Badge>
</div>
))}
<Button onClick={stageAll}>Stage All</Button>
</TabsContent>
</Tabs>

Purpose: Git commit log viewer
Features:
- Commit list with messages
- Author, timestamp
- Changed files per commit
- Diff viewer (on click)
- Branch visualization
Purpose: Agent execution logs
Log Types:
- Operation logs (file operations)
- LLM request/response logs
- Error logs
- Debug logs
Purpose: Debug view for raw AI model interactions
Features:
- Full request payload
- Full response payload
- Token usage stats
- Timing metrics
- Copy to clipboard
UI:
<Tabs>
<TabsList>
<TabsTrigger value="request">Request</TabsTrigger>
<TabsTrigger value="response">Response</TabsTrigger>
<TabsTrigger value="metadata">Metadata</TabsTrigger>
</TabsList>
<TabsContent value="request">
<pre className="text-xs overflow-auto">
{JSON.stringify(llmRequest, null, 2)}
</pre>
<Button onClick={() => navigator.clipboard.writeText(JSON.stringify(llmRequest))}>
Copy Request
</Button>
</TabsContent>
<TabsContent value="metadata">
<div>
<strong>Model:</strong> {metadata.model}
</div>
<div>
<strong>Tokens Used:</strong> {metadata.usage.prompt_tokens} in + {metadata.usage.completion_tokens} out
</div>
<div>
<strong>Latency:</strong> {metadata.latency}ms
</div>
</TabsContent>
</Tabs>

Agent Operation Lifecycle:
- User Input: Task description + attached files + context
- Agent Planning: LLM generates operation plan
{
  "reasoning": "To add a login form, I need to...",
  "operations": [
    {"type": "create_file", "params": {"path": "src/components/LoginForm.tsx", "content": "..."}},
    {"type": "edit_lines", "params": {"file_id": "abc123", "edits": [{...}]}},
    {"type": "read_file", "params": {"file_id": "def456"}}
  ]
}
- Execution: Operations run sequentially
- Staging: Changed files auto-staged (if autoCommit enabled)
- Commit: Auto-commit with AI-generated message (optional)
- Response: Results displayed in chat
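Before execution, the orchestrator's JSON plan can be validated defensively. This sketch checks each operation against the parameters its type requires; the rules for `create_file`, `edit_lines`, and `read_file` are taken from the example plan above, while `delete_file` and `search` are assumptions.

```typescript
interface Operation {
  type: string; // untrusted input from the LLM plan
  params: Record<string, unknown>;
}

// Required params per operation type. The first three rows mirror the
// example plan; delete_file and search are assumed.
const REQUIRED_PARAMS: Record<string, string[]> = {
  create_file: ["path", "content"],
  edit_lines: ["file_id", "edits"],
  read_file: ["file_id"],
  delete_file: ["file_id"],
  search: ["query"],
};

// Returns the list of validation errors; an empty list means the plan is
// safe to hand to the executor.
function validatePlan(operations: Operation[]): string[] {
  const errors: string[] = [];
  operations.forEach((op, i) => {
    const required = REQUIRED_PARAMS[op.type];
    if (!required) {
      errors.push(`operation ${i}: unknown type "${op.type}"`);
      return;
    }
    for (const key of required) {
      if (!(key in op.params)) {
        errors.push(`operation ${i}: ${op.type} missing param "${key}"`);
      }
    }
  });
  return errors;
}
```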
Real-Time Updates:
// Subscribe to operation updates
const { data: operations } = await supabase
.from('coding_agent_operations')
.select('*')
.eq('session_id', sessionId)
.order('created_at', { ascending: true });
// Real-time subscription
supabase
.channel(`operations-${sessionId}`)
.on('postgres_changes', {
event: 'INSERT',
schema: 'public',
table: 'coding_agent_operations',
}, (payload) => {
setOperations(ops => [...ops, payload.new]);
})
.subscribe();

Location: src/components/buildbook/
Components: 5
AI-powered build plan generator ("BuildBook") - creates comprehensive implementation guides from requirements.
BuildBookCard (Gallery)
↓ (select template)
ApplyBuildBookDialog
↓ (user configures)
presentation-agent Edge Function
↓ (generates BuildBook)
BuildBookDocsViewer
↓ (displays slides/sections)
BuildBookChat (interactive Q&A)
Purpose: Template card in BuildBook gallery
Template Types:
- Agile Sprint Plan: User stories → sprint tasks
- Technical Architecture: Requirements → component design
- Implementation Roadmap: Epics → milestone plan
- Testing Strategy: Requirements → test cases
Card Display:
<Card>
<BuildBookCoverUpload
coverImageUrl={template.cover_image_url}
onUpload={(url) => updateTemplate({ ...template, cover_image_url: url })}
/>
<CardHeader>
<CardTitle>{template.name}</CardTitle>
<CardDescription>{template.description}</CardDescription>
</CardHeader>
<CardContent>
<Badge>{template.category}</Badge>
<div className="text-sm">
{template.slide_count} slides • {template.estimated_time}
</div>
</CardContent>
<CardFooter>
<Button onClick={() => applyTemplate(template.id)}>
Use Template
</Button>
</CardFooter>
</Card>

Purpose: Configure BuildBook generation
Configuration:
interface BuildBookConfig {
templateId: string;
projectContext: ProjectSelectionResult; // Requirements to include
outputFormat: 'slides' | 'document' | 'both';
detailLevel: 'summary' | 'detailed' | 'comprehensive';
includeDiagrams: boolean;
includeCodeSamples: boolean;
}

Workflow:
<Dialog>
<DialogHeader>
<DialogTitle>Generate BuildBook: {template.name}</DialogTitle>
</DialogHeader>
<ProjectSelector
projectId={projectId}
shareToken={shareToken}
multiSelect={true}
allowedCategories={["requirements", "standards", "techStacks"]}
onSelect={(selection) => setProjectContext(selection)}
/>
<Select value={detailLevel} onValueChange={setDetailLevel}>
<SelectItem value="summary">Summary (10-15 slides)</SelectItem>
<SelectItem value="detailed">Detailed (20-30 slides)</SelectItem>
<SelectItem value="comprehensive">Comprehensive (40+ slides)</SelectItem>
</Select>
<Checkbox checked={includeDiagrams} onCheckedChange={setIncludeDiagrams}>
Include architecture diagrams
</Checkbox>
<Button onClick={handleGenerate} disabled={isGenerating}>
{isGenerating ? <Loader2 className="animate-spin" /> : 'Generate BuildBook'}
</Button>
</Dialog>

Generation Call:
const response = await fetch('/functions/v1/presentation-agent', {
method: 'POST',
body: JSON.stringify({
projectId,
shareToken,
templateId: config.templateId,
projectContext: config.projectContext,
detailLevel: config.detailLevel,
includeDiagrams: config.includeDiagrams,
}),
});
const { slideIds } = await response.json();
// Slides stored in presentation_slides table

Purpose: Display generated BuildBook slides
Features:
- Slide navigation (prev/next)
- Thumbnail sidebar
- Export to PDF/PPTX/DOCX
- Print mode
Slide Rendering:
<div className="buildbook-viewer">
{/* Slide navigation */}
<div className="flex justify-between items-center">
<Button onClick={prevSlide} disabled={currentSlide === 0}>
Previous
</Button>
<span>Slide {currentSlide + 1} / {slides.length}</span>
<Button onClick={nextSlide} disabled={currentSlide === slides.length - 1}>
Next
</Button>
</div>
{/* Current slide */}
<Card className="slide-container">
<h2>{slides[currentSlide].title}</h2>
<ReactMarkdown>{slides[currentSlide].content}</ReactMarkdown>
{slides[currentSlide].diagram_url && (
<img src={slides[currentSlide].diagram_url} alt="Diagram" />
)}
</Card>
{/* Thumbnail sidebar */}
<ScrollArea className="thumbnails">
{slides.map((slide, i) => (
<Card
key={slide.id}
className={cn('thumbnail', i === currentSlide && 'selected')}
onClick={() => setCurrentSlide(i)}
>
<div className="text-xs">{i + 1}</div>
<div className="text-xs truncate">{slide.title}</div>
</Card>
))}
</ScrollArea>
</div>

Purpose: Interactive Q&A about the BuildBook
Features:
- Ask questions about implementation steps
- Request clarifications
- Generate code snippets from slides
- Export conversation
Chat Interface:
<CollaborationChat
messages={messages}
blackboard={[]} // Not used for BuildBook
isStreaming={isStreaming}
streamingContent={streamingContent}
onSendMessage={async (question) => {
// Call collaboration-agent with BuildBook context
const response = await fetch('/functions/v1/collaboration-agent-orchestrator', {
method: 'POST',
body: JSON.stringify({
projectId,
shareToken,
chatHistory: messages,
attachedContext: {
presentations: [{ slides }] // Include all slides as context
},
}),
});
// Stream response
const reader = response.body.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const text = new TextDecoder().decode(value);
setStreamingContent(prev => prev + text);
}
}}
/>

Purpose: Custom cover image upload for templates
Features:
- Drag & drop upload
- Image cropping
- Preview
- Remove cover
Implementation:
<div
onDrop={handleDrop}
onDragOver={(e) => e.preventDefault()}
className="border-2 border-dashed border-muted rounded-lg p-8 text-center cursor-pointer hover:border-primary"
>
{coverImageUrl ? (
<div className="relative">
<img src={coverImageUrl} alt="Cover" className="max-h-48 mx-auto" />
<Button
variant="destructive"
size="sm"
className="absolute top-2 right-2"
onClick={handleRemove}
>
<X className="w-4 h-4" />
</Button>
</div>
) : (
<div>
<ImageIcon className="w-12 h-12 mx-auto text-muted-foreground" />
<p className="mt-2">Drop cover image here or click to upload</p>
<Input
type="file"
accept="image/*"
onChange={handleFileSelect}
className="hidden"
ref={fileInputRef}
/>
</div>
)}
</div>
Upload to Storage:
const handleDrop = async (e: React.DragEvent) => {
e.preventDefault(); // prevent the browser's default file-drop handling
const file = e.dataTransfer.files[0];
if (!file || !file.type.startsWith('image/')) return;
// Upload to Supabase Storage
const { data, error } = await supabase.storage
.from('buildbook-covers')
.upload(`${projectId}/${Date.now()}_${file.name}`, file);
if (error) {
toast.error('Upload failed');
return;
}
const publicUrl = supabase.storage
.from('buildbook-covers')
.getPublicUrl(data.path).data.publicUrl;
onUpload(publicUrl);
};
Location: src/components/canvas/
Components: 21
Visual architecture design using ReactFlow: the "Design Mode" of Pronghorn.
ReactFlow-based interactive canvas with custom nodes, edges, and controls.
AgentFlow (Main Canvas)
├─ Custom Nodes
│ ├─ CanvasNode (standard architecture nodes)
│ ├─ LabelNode (text labels)
│ ├─ NotesNode (sticky notes)
│ └─ ZoneNode (grouping container)
├─ Custom Edges
│ └─ CustomEdge (labeled, animated)
├─ Controls
│ ├─ CanvasPalette (node selector)
│ ├─ LayersManager (layer visibility)
│ ├─ NodePropertiesPanel (edit selected node)
│ └─ EdgePropertiesPanel (edit selected edge)
└─ AI Features
├─ AIArchitectDialog (auto-generate architecture)
├─ IterativeEnhancement (refine with AI)
└─ InfographicDialog (export as infographic)
Purpose: ReactFlow wrapper with multi-agent orchestration UI
Features:
- Drag & drop node placement
- Edge connections with handles
- Zoom/pan controls
- Minimap
- Node selection (single, multi, lasso)
- Copy/paste
- Undo/redo
- Auto-layout (hierarchical, force-directed)
ReactFlow Integration:
import ReactFlow, {
Node,
Edge,
useNodesState,
useEdgesState,
addEdge,
Background,
Controls,
MiniMap,
} from 'reactflow';
import 'reactflow/dist/style.css';
const nodeTypes = {
canvasNode: CanvasNode,
labelNode: LabelNode,
notesNode: NotesNode,
zoneNode: ZoneNode,
};
const edgeTypes = {
custom: CustomEdge,
};
export function AgentFlow({ projectId, shareToken }) {
const [nodes, setNodes, onNodesChange] = useNodesState([]);
const [edges, setEdges, onEdgesChange] = useEdgesState([]);
const [selectedNodes, setSelectedNodes] = useState<Node[]>([]);
// Load canvas from database
useEffect(() => {
loadCanvas();
}, [projectId]);
const loadCanvas = async () => {
const { data: canvasNodes } = await supabase
.from('canvas_nodes')
.select('*')
.eq('project_id', projectId);
const { data: canvasEdges } = await supabase
.from('canvas_edges')
.select('*')
.eq('project_id', projectId);
setNodes((canvasNodes ?? []).map(toReactFlowNode));
setEdges((canvasEdges ?? []).map(toReactFlowEdge));
};
// Handle connection creation
const onConnect = useCallback((connection: Connection) => {
// Validate connection (enforce flow direction)
if (!isValidConnection(connection)) {
toast.error('Invalid connection: edges must flow left to right');
return;
}
setEdges((eds) => addEdge(connection, eds));
// Save to database
supabase.from('canvas_edges').insert({
project_id: projectId,
source_id: connection.source,
target_id: connection.target,
label: connection.label,
});
}, []);
// Handle node drag end (save position)
const onNodeDragStop = useCallback((event, node: Node) => {
supabase
.from('canvas_nodes')
.update({ position: { x: node.position.x, y: node.position.y } })
.eq('id', node.id);
}, []);
return (
<div className="w-full h-screen">
<ReactFlow
nodes={nodes}
edges={edges}
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
onConnect={onConnect}
onNodeDragStop={onNodeDragStop}
onSelectionChange={({ nodes }) => setSelectedNodes(nodes)}
nodeTypes={nodeTypes}
edgeTypes={edgeTypes}
fitView
snapToGrid
snapGrid={[15, 15]}
>
<Background variant="dots" gap={15} size={1} />
<Controls />
<MiniMap />
</ReactFlow>
{/* Side panels */}
<CanvasPalette onAddNode={addNode} />
<LayersManager layers={layers} onToggleLayer={toggleLayer} />
{selectedNodes.length === 1 && (
<NodePropertiesPanel
node={selectedNodes[0]}
onUpdate={updateNode}
/>
)}
</div>
);
}
Purpose: Reusable architecture node with type-specific styling
Node Data Structure:
interface CanvasNodeData {
label: string;
type: string; // WEB_COMPONENT, API_ROUTER, DATABASE, etc.
subtitle?: string;
description?: string;
color?: string;
icon?: string;
metadata?: Record<string, any>;
}
Node Rendering:
function CanvasNode({ data, id, selected }: { data: CanvasNodeData; id: string; selected: boolean }) {
const nodeTypeConfig = getNodeTypeConfig(data.type);
return (
<div className="relative">
{/* Connection handles */}
<Handle type="target" position={Position.Left} className="w-3 h-3" />
<Handle type="source" position={Position.Right} className="w-3 h-3" />
<Card
className={cn(
'min-w-[200px] p-4 rounded-lg shadow-md transition-all',
selected && 'ring-2 ring-primary',
'hover:shadow-lg'
)}
style={{
backgroundColor: nodeTypeConfig.color,
borderColor: nodeTypeConfig.color,
}}
>
{/* Header with icon */}
<div className="flex items-center gap-2 mb-2">
{nodeTypeConfig.icon && (
<div className="w-8 h-8 rounded flex items-center justify-center bg-white/20">
{nodeTypeConfig.icon}
</div>
)}
<div>
<div className="font-semibold">{data.label}</div>
{data.subtitle && (
<div className="text-xs opacity-80">{data.subtitle}</div>
)}
</div>
</div>
{/* Description */}
{data.description && (
<div className="text-sm opacity-90 mt-2">
{data.description}
</div>
)}
{/* Node type badge */}
<Badge variant="secondary" className="mt-2">
{nodeTypeConfig.display_label}
</Badge>
</Card>
</div>
);
}
Node Type Configuration (from database):
// Loaded from canvas_node_types table
const nodeTypeConfigs = {
WEB_COMPONENT: {
display_label: 'Web Component',
color: '#3b82f6', // blue-500
icon: <Component />,
category: 'Frontend',
order_score: 100,
},
API_ROUTER: {
display_label: 'API Router',
color: '#22c55e', // green-500
icon: <Route />,
category: 'Backend',
order_score: 300,
},
DATABASE: {
display_label: 'Database',
color: '#a855f7', // purple-500
icon: <Database />,
category: 'Data',
order_score: 600,
},
// ... 21+ more types
};
Purpose: Drag-and-drop node palette
UI:
<ScrollArea className="h-full">
<div className="p-4">
<h3 className="font-semibold mb-4">Components</h3>
{/* Group by category */}
{Object.entries(groupedNodeTypes).map(([category, types]) => (
<Collapsible key={category} defaultOpen={category === 'Frontend'}>
<CollapsibleTrigger className="flex items-center justify-between w-full">
<span>{category}</span>
<ChevronDown className="w-4 h-4" />
</CollapsibleTrigger>
<CollapsibleContent>
<div className="grid grid-cols-2 gap-2 mt-2">
{types.map((nodeType) => (
<Card
key={nodeType.system_name}
draggable
onDragStart={(e) => {
e.dataTransfer.setData('application/reactflow', JSON.stringify({
type: 'canvasNode',
data: {
label: nodeType.display_label,
type: nodeType.system_name,
}
}));
}}
className="p-2 cursor-move hover:bg-accent"
style={{ borderLeft: `4px solid ${nodeType.color_class}` }}
>
<div className="flex flex-col items-center">
{nodeType.icon}
<span className="text-xs mt-1">{nodeType.display_label}</span>
</div>
</Card>
))}
</div>
</CollapsibleContent>
</Collapsible>
))}
</div>
</ScrollArea>
Drop Handler (in AgentFlow):
const onDrop = useCallback((event: React.DragEvent) => {
event.preventDefault(); // prevent the browser's default file-drop handling
const reactFlowBounds = reactFlowWrapper.current.getBoundingClientRect();
const data = JSON.parse(event.dataTransfer.getData('application/reactflow'));
const position = reactFlowInstance.project({
x: event.clientX - reactFlowBounds.left,
y: event.clientY - reactFlowBounds.top,
});
const newNode = {
id: crypto.randomUUID(),
type: data.type,
position,
data: data.data,
};
setNodes((nds) => nds.concat(newNode));
// Save to database
supabase.from('canvas_nodes').insert({
id: newNode.id,
project_id: projectId,
type: data.data.type,
position,
data: data.data,
});
}, []);
Purpose: Auto-generate architecture from description
Workflow:
<Dialog>
<DialogHeader>
<DialogTitle>AI Architect</DialogTitle>
<DialogDescription>
Describe your application and AI will generate an architecture diagram
</DialogDescription>
</DialogHeader>
<Textarea
placeholder="Example: A social media app with user profiles, posts, comments, real-time chat, and admin dashboard"
value={description}
onChange={(e) => setDescription(e.target.value)}
rows={6}
/>
<ProjectSelector
projectId={projectId}
shareToken={shareToken}
onSelect={(selection) => setAttachedContext(selection)}
/>
<p className="text-sm text-muted-foreground">
Attach requirements, standards, or existing nodes to inform the AI
</p>
<div className="flex items-center gap-2">
<Checkbox id="draw-edges" checked={drawEdges} onCheckedChange={setDrawEdges} />
<Label htmlFor="draw-edges">Generate connections between nodes</Label>
</div>
<Button onClick={handleGenerate} disabled={isGenerating}>
{isGenerating ? <Loader2 className="animate-spin" /> : 'Generate Architecture'}
</Button>
</Dialog>
API Call:
const handleGenerate = async () => {
setIsGenerating(true);
const response = await fetch('/functions/v1/ai-architect', {
method: 'POST',
body: JSON.stringify({
projectId,
shareToken,
description,
existingNodes: nodes.map(n => ({ id: n.id, label: n.data.label, type: n.data.type })),
existingEdges: edges,
drawEdges,
attachedContext,
}),
});
const { nodes: generatedNodes, edges: generatedEdges } = await response.json();
// Add to canvas
setNodes((nds) => [...nds, ...generatedNodes.map(toReactFlowNode)]);
if (drawEdges) {
setEdges((eds) => [...eds, ...generatedEdges.map(toReactFlowEdge)]);
}
// Save to database
await Promise.all(generatedNodes.map(node =>
supabase.from('canvas_nodes').insert({
id: node.id,
project_id: projectId,
type: node.type,
position: { x: node.x, y: node.y },
data: { label: node.label, subtitle: node.subtitle, description: node.description },
})
));
toast.success(`Generated ${generatedNodes.length} nodes`);
setIsGenerating(false);
};
Purpose: Show/hide layers of canvas nodes
Features:
- Toggle layer visibility
- Rename layers
- Create/delete layers
- Assign nodes to layers
Layer Structure:
interface Layer {
id: string;
name: string;
visible: boolean;
color: string;
nodeIds: string[];
}
UI:
<div className="layers-panel">
{layers.map(layer => (
<div key={layer.id} className="flex items-center gap-2 p-2 hover:bg-accent">
<Checkbox
checked={layer.visible}
onCheckedChange={(visible) => updateLayer(layer.id, { visible })}
/>
<div
className="w-4 h-4 rounded"
style={{ backgroundColor: layer.color }}
/>
<Input
value={layer.name}
onChange={(e) => updateLayer(layer.id, { name: e.target.value })}
className="flex-1"
/>
<span className="text-xs text-muted-foreground">
{layer.nodeIds.length} nodes
</span>
<Button
size="sm"
variant="ghost"
onClick={() => deleteLayer(layer.id)}
>
<Trash className="w-4 h-4" />
</Button>
</div>
))}
<Button onClick={createLayer}>
<Plus /> New Layer
</Button>
</div>
Filter Nodes by Layer:
const visibleNodes = nodes.filter(node => {
const layer = layers.find(l => l.nodeIds.includes(node.id));
return !layer || layer.visible;
});
- NodePropertiesPanel: Edit selected node (label, description, color, metadata)
- EdgePropertiesPanel: Edit edge label and type
- CustomEdge: Labeled edge with animation option
- LabelNode: Text-only label node
- NotesNode: Sticky note for annotations
- ZoneNode: Container node for grouping
- Lasso: Lasso selection tool (draw to select multiple)
- LinkSelector: Select requirements/standards to link to node
- ChangeHeatmap: Heatmap overlay showing frequently changed areas
- ChangeLogViewer: History of canvas changes
- IterativeEnhancement: AI-powered iterative refinement
- InfographicDialog: Export canvas as infographic (PNG/SVG)
- BlackboardViewer: Agent blackboard for multi-agent canvas editing
- IterationVisualizer: Visualize agent iteration progress
- AgentPromptEditDialog: Edit agent system prompts
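The `onConnect` handler in the AgentFlow snippet above guards new edges with an `isValidConnection` helper that is not shown. A minimal sketch of that rule ("edges must flow left to right"), using local stand-in types rather than the real ReactFlow imports so it is self-contained:

```typescript
// Local stand-ins for ReactFlow's Connection and Node types (illustrative only).
interface Connection {
  source: string | null;
  target: string | null;
}

interface FlowNode {
  id: string;
  position: { x: number; y: number };
}

// A connection is accepted only when both endpoints exist, it is not a
// self-loop, and the source node sits at or left of the target node,
// matching the "edges must flow left to right" toast in onConnect.
function isValidConnection(connection: Connection, nodes: FlowNode[]): boolean {
  if (!connection.source || !connection.target) return false;
  if (connection.source === connection.target) return false;
  const source = nodes.find((n) => n.id === connection.source);
  const target = nodes.find((n) => n.id === connection.target);
  if (!source || !target) return false;
  return source.position.x <= target.position.x;
}
```

In the real component the current `nodes` state would be closed over rather than passed explicitly.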
Location: src/components/collaboration/
Components: 5
Real-time collaborative editing and AI-assisted teamwork.
Purpose: AI assistant for collaborative sessions
Features:
- Multi-user chat (shared session)
- AI agent participant (collaboration-agent-orchestrator)
- Blackboard integration (shared memory)
- Context attachment
- Markdown rendering
Props:
interface CollaborationChatProps {
messages: CollaborationMessage[];
blackboard: BlackboardEntry[];
isStreaming: boolean;
streamingContent: string;
onSendMessage: (message: string) => void;
disabled?: boolean;
showBlackboard?: boolean;
attachedCount?: number;
onAttach?: () => void;
onClearContext?: () => void;
}
Message Structure:
interface CollaborationMessage {
id: string;
role: 'user' | 'assistant';
content: string;
created_at: string;
metadata?: {
user_name?: string;
thinking?: string;
tools_used?: string[];
};
}
Blackboard Display:
{showBlackboard && blackboard.length > 0 && (
<Collapsible>
<CollapsibleTrigger>
<Brain className="w-4 h-4 mr-2" />
Shared Memory ({blackboard.length} entries)
</CollapsibleTrigger>
<CollapsibleContent>
<ScrollArea className="max-h-48">
{blackboard.map(entry => (
<Card key={entry.id} className="mb-2 p-2">
<Badge>{entry.entry_type}</Badge>
<div className="text-sm mt-1">{entry.content}</div>
</Card>
))}
</ScrollArea>
</CollapsibleContent>
</Collapsible>
)}
Purpose: Real-time collaborative text editor (for requirements, docs)
Features:
- Operational transformation (OT) for conflict resolution
- User cursors
- Presence indicators
- Version history
- Comment threads
Implementation (conceptual):
import { useEditor, EditorContent } from '@tiptap/react';
import StarterKit from '@tiptap/starter-kit';
import Collaboration from '@tiptap/extension-collaboration';
import CollaborationCursor from '@tiptap/extension-collaboration-cursor';
import { applyUpdate } from 'yjs';
export function CollaborationEditor({ documentId, projectId, shareToken }) {
const editor = useEditor({
extensions: [
StarterKit,
Collaboration.configure({
document: ydoc, // Yjs shared document
}),
CollaborationCursor.configure({
provider,
user: currentUser,
}),
],
});
// Sync with Supabase
useEffect(() => {
const channel = supabase.channel(`doc-${documentId}`)
.on('broadcast', { event: 'update' }, (payload) => {
// Apply remote changes
applyUpdate(ydoc, payload.update);
})
.subscribe();
return () => channel.unsubscribe();
}, [documentId]);
return (
<div className="editor-container">
<EditorContent editor={editor} />
{/* Active users */}
<div className="flex gap-2">
{activeUsers.map(user => (
<Avatar key={user.id}>
<AvatarImage src={user.avatar_url} />
<AvatarFallback>{user.display_name[0]}</AvatarFallback>
</Avatar>
))}
</div>
</div>
);
}
Purpose: Activity timeline for collaborative sessions
Events:
- User joined/left
- Document edited
- Comment added
- AI suggestion accepted
- Canvas node created/modified
UI:
<ScrollArea>
{events.map(event => (
<TimelineItem key={event.id}>
<TimelineIcon type={event.type} />
<TimelineContent>
<div className="flex items-center gap-2">
<Avatar user={event.user} />
<span className="font-semibold">{event.user.display_name}</span>
<span className="text-sm text-muted-foreground">
{event.action}
</span>
</div>
<div className="text-xs text-muted-foreground">
{formatDistanceToNow(event.created_at)} ago
</div>
</TimelineContent>
</TimelineItem>
))}
</ScrollArea>
Purpose: Visualize collaboration hotspots
Metrics:
- Most edited requirements
- Most active canvas areas
- Most discussed topics
Heatmap Rendering:
<ResponsiveContainer width="100%" height={400}>
<Treemap
data={hotspots.map(spot => ({
name: spot.element_label,
size: spot.edit_count,
value: spot.edit_count,
}))}
dataKey="size"
aspectRatio={4 / 3}
stroke="#fff"
fill="#8884d8"
>
<Tooltip />
</Treemap>
</ResponsiveContainer>
Purpose: Collaborative artifact editing (PDFs, images, docs)
Features:
- Shared annotations
- Comment threads on specific areas
- Version comparison
- Export annotated version
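The annotation features above imply a data shape for region-anchored comment threads. A hypothetical sketch (field names are illustrative, not the Pronghorn schema); coordinates are normalized so annotations stay anchored across zoom levels:

```typescript
// Hypothetical annotation record; field names are assumptions, not the schema.
interface ArtifactAnnotation {
  id: string;
  artifactId: string;
  authorName: string;
  // Normalized (0..1) region so the anchor survives zoom and resize
  region: { x: number; y: number; width: number; height: number };
  comment: string;
  resolved: boolean;
  createdAt: string; // ISO timestamp
}

// Unresolved annotations, oldest first, for a comment-thread sidebar.
function openThreads(annotations: ArtifactAnnotation[]): ArtifactAnnotation[] {
  return annotations
    .filter((a) => !a.resolved)
    .sort((a, b) => a.createdAt.localeCompare(b.createdAt));
}
```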
Location: src/components/dashboard/
Components: 10+
Project management dashboard: the landing page after login.
Purpose: Grid/list item for project display
Variants:
- Grid (default): Card layout with thumbnail
- List: Compact row layout
Features:
- Status badge (DESIGN/AUDIT/BUILD)
- Last updated timestamp
- Coverage % (if in AUDIT/BUILD)
- Quick actions (edit, delete, clone)
- Click to open project
Status Colors:
const statusConfig = {
DESIGN: {
label: "Design",
className: "bg-status-design/10 text-status-design", // Purple
},
AUDIT: {
label: "Audit",
className: "bg-status-audit/10 text-status-audit", // Blue
},
BUILD: {
label: "Build",
className: "bg-status-build/10 text-status-build", // Green
},
};
Grid Variant:
<Card className="group hover:shadow-lg transition-shadow cursor-pointer" onClick={() => onClick(projectId)}>
{/* Splash image */}
<div className="w-full h-32 bg-muted overflow-hidden">
{splashImageUrl ? (
<img src={splashImageUrl} alt={projectName} className="w-full h-full object-cover" />
) : (
<div className="w-full h-full flex items-center justify-center">
<ImageIcon className="w-12 h-12 text-muted-foreground" />
</div>
)}
</div>
<CardHeader>
<div className="flex items-start justify-between">
<CardTitle className="truncate">{projectName}</CardTitle>
<Badge className={statusInfo.className}>{statusInfo.label}</Badge>
</div>
<CardDescription className="line-clamp-2">{description}</CardDescription>
</CardHeader>
<CardContent>
<div className="flex items-center gap-4 text-sm text-muted-foreground">
<div className="flex items-center gap-1">
<Clock className="w-4 h-4" />
{formatDistanceToNow(lastUpdated)} ago
</div>
{coverage !== undefined && (
<div className="flex items-center gap-1">
<TrendingUp className="w-4 h-4" />
{coverage}% coverage
</div>
)}
</div>
</CardContent>
<CardFooter className="opacity-0 group-hover:opacity-100 transition-opacity">
<EditProjectDialog projectId={projectId} onUpdate={onUpdate} />
<CloneProjectDialog projectId={projectId} shareToken={shareToken} />
<DeleteProjectDialog projectId={projectId} onUpdate={onUpdate} />
</CardFooter>
</Card>
Purpose: Create new project with AI-powered setup
Enhanced Features:
- Import from GitHub repo
- Import from template
- AI-generated project structure from description
- Tech stack selection
- Budget/scope estimation
Workflow:
<Dialog>
<Tabs>
<TabsList>
<TabsTrigger value="blank">Blank Project</TabsTrigger>
<TabsTrigger value="github">From GitHub</TabsTrigger>
<TabsTrigger value="template">From Template</TabsTrigger>
<TabsTrigger value="ai">AI-Generated</TabsTrigger>
</TabsList>
<TabsContent value="blank">
<Input placeholder="Project name" value={name} onChange={...} />
<Textarea placeholder="Description" value={description} onChange={...} />
<Button onClick={createBlankProject}>Create</Button>
</TabsContent>
<TabsContent value="github">
<Input placeholder="GitHub repo URL" value={repoUrl} onChange={...} />
<Button onClick={importFromGitHub}>Import</Button>
</TabsContent>
<TabsContent value="ai">
<Textarea
placeholder="Describe your application in detail..."
value={aiDescription}
onChange={...}
rows={8}
/>
<Select value={techStack} onValueChange={setTechStack}>
<SelectItem value="react-node">React + Node.js</SelectItem>
<SelectItem value="next">Next.js</SelectItem>
<SelectItem value="django">Django + React</SelectItem>
</Select>
<Button onClick={generateWithAI}>
Generate Project Structure
</Button>
</TabsContent>
</Tabs>
</Dialog>
AI Generation:
const generateWithAI = async () => {
// 1. Create project
const { data: project } = await supabase.rpc('create_project_with_token', {
p_name: name,
p_description: aiDescription,
p_token: shareToken,
});
// 2. Generate requirements from AI
const { data: requirements } = await fetch('/functions/v1/decompose-requirements', {
method: 'POST',
body: JSON.stringify({
projectId: project.id,
description: aiDescription,
model: 'gemini-2.5-flash',
}),
}).then(r => r.json());
// 3. Generate architecture
const { nodes, edges } = await fetch('/functions/v1/ai-architect', {
method: 'POST',
body: JSON.stringify({
projectId: project.id,
description: aiDescription,
techStack,
}),
}).then(r => r.json());
toast.success('Project generated! Opening...');
navigate(`/project/${project.id}`);
};
Purpose: Edit project metadata
Editable Fields:
- Name
- Description
- Status (DESIGN/AUDIT/BUILD)
- Organization
- Budget
- Scope
- Splash image
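A save handler only needs to write the fields the user actually touched. A hedged sketch of a patch builder for the editable fields listed above (the table and column names are assumptions, not the real Pronghorn schema):

```typescript
type ProjectStatus = 'DESIGN' | 'AUDIT' | 'BUILD';

// Assumed column names; adjust to the real projects table.
interface ProjectPatch {
  name?: string;
  description?: string;
  status?: ProjectStatus;
  organization_id?: string;
  budget?: number;
  scope?: string;
  splash_image_url?: string;
}

// Drop untouched (undefined) fields so the UPDATE writes only what changed.
function buildProjectPatch(form: ProjectPatch): ProjectPatch {
  return Object.fromEntries(
    Object.entries(form).filter(([, value]) => value !== undefined)
  ) as ProjectPatch;
}

// Usage (assumed table name):
// await supabase.from('projects').update(buildProjectPatch(form)).eq('id', projectId);
```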
Purpose: Delete project with confirmation
Safety:
- Requires typing project name to confirm
- Shows impact (# requirements, # files, etc.)
- Option for soft delete (archive) vs hard delete
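The type-to-confirm guard described above can be sketched as a pure check (helper and label names are illustrative, not from the Pronghorn codebase):

```typescript
type DeleteMode = 'archive' | 'hard';

// The destructive button stays disabled until the typed name matches the
// project name exactly (surrounding whitespace ignored). Hard deletes are
// irreversible; archive is the recoverable soft delete.
function canConfirmDelete(typedName: string, projectName: string): boolean {
  return typedName.trim() === projectName;
}

function confirmLabel(mode: DeleteMode): string {
  return mode === 'hard' ? 'Delete permanently' : 'Archive project';
}
```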
Purpose: Duplicate project
Options:
- Clone all data (requirements, canvas, files)
- Clone structure only
- Clone to different organization
Purpose: Recent activity stream
Event Types:
- Project created
- Requirements added
- Audit completed
- Build deployed
- User joined
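The event types above map naturally onto a discriminated union for rendering; a sketch (variant names and payload fields are illustrative assumptions):

```typescript
// Illustrative event variants; payload fields are assumptions.
type ActivityEvent =
  | { type: 'project_created'; projectName: string }
  | { type: 'requirements_added'; count: number }
  | { type: 'audit_completed'; coverage: number }
  | { type: 'build_deployed'; url: string }
  | { type: 'user_joined'; userName: string };

// One human-readable label per variant; the switch is exhaustive, so adding
// a new event type is a compile error until it gets a label here.
function eventLabel(event: ActivityEvent): string {
  switch (event.type) {
    case 'project_created': return `created ${event.projectName}`;
    case 'requirements_added': return `added ${event.count} requirements`;
    case 'audit_completed': return `completed an audit at ${event.coverage}% coverage`;
    case 'build_deployed': return 'deployed a build';
    case 'user_joined': return `${event.userName} joined`;
  }
}
```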
Purpose: Warn anonymous users about session limits
Message:
<Alert variant="warning">
<AlertTriangle className="h-4 w-4" />
<AlertTitle>Anonymous Session</AlertTitle>
<AlertDescription>
You're using Pronghorn anonymously. Your project will be deleted after 24 hours of inactivity.
<Button onClick={openSignUpDialog}>Create Account to Save</Button>
</AlertDescription>
</Alert>
Purpose: Add project via share token
UI:
<Dialog>
<DialogHeader>
<DialogTitle>Add Shared Project</DialogTitle>
</DialogHeader>
<Input
placeholder="Enter share token..."
value={token}
onChange={(e) => setToken(e.target.value)}
/>
<Button onClick={async () => {
const { data } = await supabase.rpc('get_project_with_token', {
p_project_id: null, // Unknown project ID
p_token: token,
});
if (data) {
// Save to user's project list
addToMyProjects(data.id, token);
navigate(`/project/${data.id}?shareToken=${token}`);
}
}}>
Add Project
</Button>
</Dialog>
Purpose: Display shared/linked project (read-only badge)
Features:
- "Shared" badge
- Original owner info
- Fork/clone option
- Expiry date (if time-limited share)
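For the time-limited share case, a small helper can decide whether to show the expired state (a sketch; the expiry timestamp column is an assumption):

```typescript
// Returns true once a time-limited share has lapsed; shares without an
// expiry timestamp never expire. Injecting `now` keeps the check testable.
function isShareExpired(expiresAt: string | null, now: Date = new Date()): boolean {
  if (!expiresAt) return false;
  return new Date(expiresAt).getTime() <= now.getTime();
}
```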
All modules use ProjectSelector for context attachment:
<ProjectSelector
projectId={projectId}
shareToken={shareToken}
multiSelect={true}
allowedCategories={["requirements", "standards", "files"]}
onSelect={(selection: ProjectSelectionResult) => {
// selection contains full data for all selected items
setAttachedContext(selection);
}}
/>
Standard pattern across all modules:
useEffect(() => {
const channel = supabase.channel(`${moduleName}-${entityId}`)
.on('broadcast', { event: 'refresh' }, (payload) => {
refetch();
})
.on('postgres_changes', {
event: '*',
schema: 'public',
table: tableName,
filter: `project_id=eq.${projectId}`,
}, (payload) => {
handleChange(payload);
})
.subscribe();
return () => channel.unsubscribe();
}, [entityId]);
Consistent user feedback:
import { toast } from 'sonner';
toast.success('Audit completed!');
toast.error('Failed to generate architecture');
toast.info('Agent is processing...');
toast.loading('Building knowledge graph...');
Skeleton loaders for async operations:
{isLoading ? (
<Skeleton className="h-24 w-full" />
) : (
<DataDisplay data={data} />
)}
Documentation Generated: Instance 5 - Feature Components Part 1
Last Updated: 2026-01-06
Maintainer: Pronghorn Documentation Team