
Commit 25fdb26

joelteply and claude authored
Implement scoped state architecture with PageStateService (#253)
* Implement scoped state architecture with PageStateService

Creates proper cascading state system (Site → Page → Widget → Control):

PageStateService (new):
- Single source of truth for current page/route state
- Router (MainWidget) parses URL once, sets state BEFORE creating widgets
- Widgets subscribe to state changes instead of reading URL/attributes
- Eliminates timing issues between URL parsing and widget creation

MainWidget changes:
- Import and use pageState.setContent() before creating widgets
- All route handlers (setupUrlRouting, navigateToPath, handleTabClick, openContentTab) now set page state first

BaseWidget changes:
- Add protected pageState getter for easy access to current state
- Add subscribeToPageState() helper with auto-cleanup
- Clean up subscription in disconnectedCallback

ChatWidget changes:
- Use this.pageState as primary source for room (single source of truth)
- Fallback chain: room attr → pageState → entity-id attr → UserState
- Pinned widgets (room attribute) still ignore page state

See docs/SCOPED-STATE-ARCHITECTURE.md for full architecture.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add Positronic reactive state architecture for AI context awareness

Implements cascading state system (Site → Page → Widget) that automatically flows to AI prompts via RAG context injection.
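The subscribe/notify pattern behind these commits can be sketched as follows. This is a minimal illustration, not the real PageStateService/ReactiveStore API; the class shape and field names are assumptions.

```typescript
// Minimal subscribe/notify store sketch (assumed API; the commit's
// ReactiveStore/PageStateService may differ in details).
type Listener<T> = (state: T) => void;

class ReactiveStore<T> {
  private listeners = new Set<Listener<T>>();
  constructor(private state: T) {}

  get(): T {
    return this.state;
  }

  // Replace state and notify every subscriber.
  set(next: T): void {
    this.state = next;
    this.listeners.forEach((fn) => fn(this.state));
  }

  // Emits current state immediately, returns an unsubscribe function
  // for cleanup in disconnectedCallback.
  subscribe(fn: Listener<T>): () => void {
    this.listeners.add(fn);
    fn(this.state);
    return () => this.listeners.delete(fn);
  }
}

interface PageState { path: string; room: string | null }
const pageState = new ReactiveStore<PageState>({ path: "/", room: null });

// Router sets state BEFORE widgets are created...
pageState.set({ path: "/chat/general", room: "general" });
// ...so a widget subscribing later sees the current route instantly,
// eliminating the URL-parse/widget-create timing race.
const unsubscribe = pageState.subscribe((s) => console.log("room:", s.room));
unsubscribe();
```

Because subscribe() replays the current state, widget creation order no longer matters.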
New files:
- ReactiveStore.ts: Generic reactive store with subscribe/notify pattern
- SiteState.ts: Global session state (user, theme, session)
- WidgetStateRegistry.ts: Dynamic registry for widget state slices
- PositronicRAGContext.ts: Combines all state layers into RAG strings
- PositronicBridge.ts: Browser→server bridge via Commands.execute()

Bug fixes:
- Fix schema generator: move simple params before nested objects (regex stops at first } so setRAGString was being stripped)
- Fix browser command routing: route setRAGString/getStoredContext to server (was falling through to widget introspection)
- Add diagnostic logging to WidgetContextService and widget-state command

Integration:
- ContinuumWidget initializes PositronicBridge on startup
- WidgetContextService stores both legacy contexts and new RAG strings
- BaseWidget gets registerWidgetState() helper for opt-in state emission
- WebViewWidget demonstrates state registration pattern

Result: AIs now receive context like "User viewing Settings > AI Providers" enabling contextually-aware responses.

* Fix schema generator to handle nested object params

Two major bugs fixed:

1. Brace-counting for nested objects
   - Old regex `([^}]+)` stopped at first `}`
   - New extractInterfaceBody() properly counts braces
   - Params after nested objects (like setRAGString) now captured
   - Fixes 18+ commands with nested object parameters

2. Subcommand detection for single-interface files
   - Old logic incorrectly added /debug to widget-state command
   - Now only extracts subcommands when MULTIPLE *Params interfaces exist in the same file (like WallWriteParams, WallReadParams)
   - WidgetStateDebugParams + "widget-state" → "widget-state" (correct)
   - WallWriteParams + "wall" → "wall/write" (still works)

Result: All 158 commands now have correct names and complete params.
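The brace-counting fix described above can be sketched like this. The function name extractInterfaceBody comes from the commit; the implementation below is an assumed reconstruction, not the generator's actual code.

```typescript
// Brace-counted extraction of an interface body. The old approach,
// a regex like /interface X \{([^}]+)\}/, stopped at the FIRST '}',
// so params after a nested object literal were silently dropped.
function extractInterfaceBody(source: string, name: string): string | null {
  const start = source.indexOf(`interface ${name}`);
  if (start === -1) return null;
  const open = source.indexOf("{", start);
  if (open === -1) return null;
  let depth = 0;
  for (let i = open; i < source.length; i++) {
    if (source[i] === "{") depth++;
    else if (source[i] === "}") {
      depth--;
      // Only the brace that returns depth to 0 closes the interface.
      if (depth === 0) return source.slice(open + 1, i);
    }
  }
  return null; // unbalanced braces
}

const src = `
interface SetRAGStringParams {
  options: { scope: string };
  ragString: string;
}`;
// With brace counting, ragString (after the nested object) survives.
const body = extractInterfaceBody(src, "SetRAGStringParams");
```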
* Fix generator: use process.cwd() and path aliases

Generator fixes for robust command creation:
- Use process.cwd() instead of fragile __dirname traversal
- Update all templates to use path aliases (@system/, @daemons/, @server/)
- Fix super() to use {{COMMAND_PATH}} instead of {{COMMAND_NAME}}

process.cwd() is robust since server always starts from jtag root. Path aliases eliminate depth-dependent relative imports.

* Document Generator + OOP intertwined architecture philosophy

Core insight: Generators and OOP are parallel forces that reinforce each other:
- Generators ensure structural correctness at creation time
- OOP/type system ensures behavioral correctness at runtime
- Together they enable tree-based AI delegation of ability

Key principles:
- AIs should create generators for repeatable patterns
- This reduces friction for all future AIs (evolutionary pressure)
- Strong enforcement at boundaries enables creative freedom inside
- The stricter the interface, the more freedom in implementation

Add GENERATOR-OOP-PHILOSOPHY.md and reference in CLAUDE.md

* Add native tool_use support for Anthropic/OpenAI providers

Implements JSON tool_use format alongside existing XML parsing:

ToolFormatAdapter pattern (abstract base):
- formatToolsForPrompt(): RAG → AI direction
- formatResultsForContext(): System → AI direction
- matches()/parse(): AI → System direction (parse calls)

Concrete adapters:
- OldStyleToolAdapter: <tool name="..."> format
- AnthropicStyleToolAdapter: <tool_use><tool_name> format

Native JSON support:
- convertToNativeToolSpecs(): Convert to Anthropic JSON format
- supportsNativeTools(): Check provider capability
- sanitize/unsanitizeToolName(): Handle slash conversion (data/list → data__list)

AnthropicAdapter changes:
- Pass tools[] and tool_choice to API
- Parse tool_use content blocks from response
- Return toolCalls[] in response for native handling

PersonaResponseGenerator:
- Check for native toolCalls first, fall back to XML parsing
- Add native tools for Anthropic/OpenAI providers automatically

This is more reliable than XML parsing - the API returns structured JSON.

* Update generated files and bump version

Auto-generated updates:
- browser/generated.ts, server/generated.ts
- generated-command-schemas.json
- shared/generated-command-constants.ts
- version.ts → 1.0.6665

* Add fast text similarity fallback for semantic loop detection

The semantic loop detector was timing out because messages have no stored embeddings, forcing expensive on-the-fly embedding generation.

Fix: Add n-gram Jaccard similarity as fast fallback:
- O(n) tokenization into unigrams + bigrams
- Jaccard coefficient (intersection/union) for similarity
- Used when embeddings unavailable (0ms vs 42ms-6s per message)
- Still uses embedding similarity when stored embeddings exist

This eliminates the 60-second timeout that was allowing duplicate messages through when AIs responded too quickly.

* Add MCP discovery meta-tools for better tool navigation

With 159+ tools, discoverability is critical. Added three meta-tools:

1. jtag_list_categories - List all categories with counts and top tools. Shows interface (15), collaboration (30), ai (25), etc. with descriptions and the most useful tools per category.
2. jtag_get_tool_help - Get detailed help for any tool. Returns params (name, type, required, description), example usage, and the correct MCP tool name format.
3. Enhanced jtag_search_tools - Already existed, unchanged.

Discovery flow:
jtag_list_categories → see what's available
jtag_search_tools → find specific tools
jtag_get_tool_help → understand parameters

Meta-tools are sorted to appear first in tool list (negative priority).

* Add PositronCursor - AI's spatial presence in the interface

The cursor is the AI's "hand" - showing where their attention is focused. Not a mouse cursor, but a presence indicator for pointing and highlighting.

Features:
- positron/cursor command with actions: focus, unfocus, draw, clear
- Focus cursor: glowing ring that pulses, shows tooltip with persona name
- Draw shapes: circle, rectangle, arrow, underline with glow effect
- Shadow DOM search: finds elements across widget boundaries
- Persona support: personaId/personaName params for "who is pointing"
- Canvas overlay for shapes with configurable color/duration

Visual design:
- 40px glowing ring with radial gradient background
- Triple box-shadow for neon glow effect (15px + 30px + 45px)
- 4px thick dashed lines with 15px shadow blur for shapes
- Pulse animations for different modes (pointing, highlighting)

Usage:
./jtag positron/cursor --action=focus --x=500 --y=300 --color="#00ff00" \
  --personaName="Helper AI" --message="Looking here"
./jtag positron/cursor --action=draw --shape=circle --x=600 --y=400

* Add diagnostic logging for embedding errors

- OllamaAdapter: Log embed request params and HTTP 500 response bodies
- EmbeddingService: Don't silently swallow errors, log them for debugging
- Helps diagnose embedding generation failures during semantic search

* canvas draw

Summary:
- ✅ DrawingCanvasWidget with full tools (brush, eraser, line, rectangle, circle, arrow)
- ✅ canvas/vision command captures canvas → sends to Claude Sonnet 4.5 → returns description
- ✅ Anthropic API call succeeds (confirmed in logs: "AI described the canvas")
- ⚠️ CLI times out after 10 seconds - the vision API takes longer but does complete

The core flow works:
1. canvas/vision --action=describe → Browser captures canvas → Server calls Anthropic API → AI describes drawing

For the adapter management widgets you mentioned - that would go in the Settings page to let users:
- See which AI providers are configured
- Test adapter connections
- Set preferred providers per task type (vision, chat, code, etc.)
- Monitor adapter health and costs

* Add collaborative canvas with stroke persistence

Phase 1 of collaborative canvas architecture:

Data Layer:
- CanvasStrokeEntity with points, tool, color, size, bounds
- Register in EntityRegistry for proper schema/collection support
- Add CANVAS_STROKES to COLLECTIONS constant
- Add 'canvas' to ContentType union

Commands:
- canvas/stroke/add - Save stroke with auto-calculated bounds
- canvas/stroke/list - Query strokes by canvasId with ordering

Infrastructure:
- Add canvas room to DefaultEntities and seed script
- Add canvas.json recipe with right panel chat
- Update ContentTypeRegistry with canvas content type
- Add CANVAS_EVENTS constants for real-time sync

Widget Updates:
- DrawingCanvasWidget now accepts activityId attribute
- Loads strokes from database on init (after ctx ready)
- Saves strokes via canvas/stroke/add command
- Subscribes to stroke events (real-time sync pending)

Known limitation: Real-time event bridging from server to browser not working for custom events. Strokes sync on page refresh.
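The "auto-calculated bounds" mentioned for canvas/stroke/add can be sketched as a simple bounding-box pass over the stroke's points. The Point/Bounds shapes here are assumptions; CanvasStrokeEntity's actual fields may differ.

```typescript
// Sketch: derive a stroke's bounding box from its points so spatial
// queries don't have to scan every point (assumed entity shape).
interface Point { x: number; y: number }
interface Bounds { minX: number; minY: number; maxX: number; maxY: number }

function strokeBounds(points: Point[]): Bounds {
  if (points.length === 0) return { minX: 0, minY: 0, maxX: 0, maxY: 0 };
  let minX = points[0].x, minY = points[0].y;
  let maxX = minX, maxY = minY;
  for (const p of points) {
    if (p.x < minX) minX = p.x;
    if (p.y < minY) minY = p.y;
    if (p.x > maxX) maxX = p.x;
    if (p.y > maxY) maxY = p.y;
  }
  return { minX, minY, maxX, maxY };
}
```

A stroke/add handler would compute this once at save time and persist it alongside the points.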
* Simplify canvas entity + fix widget layout issues

Canvas Architecture:
- CanvasStrokeEntity now extends CollaborativeOperationEntity base class
- Remove duplicate canvasId field, use activityId from base class
- Stroke commands map canvasId param → activityId field
- Add CollaborativeOperationEntity for append-only operation logs
- Add CollaborativeActivityWidget base for collaborative content

Layout Fixes:
- main-panel.css: Add generic .content-view > * rule for widget expansion
- WebViewWidget: Fix flex layout for proper height/width fill
- chat-widget.css: Fix compact mode textarea height (was insanely tall)
  - Add height: 32px, max-height: 60px to message-input
  - Add max-height: 70px to input-container
  - Add resize: none to prevent manual resize

* Add unified vision/multimodal architecture for AI providers

New centralized services:
- VisionCapabilityService: Registry for vision-capable models
  - Pattern-matching for Ollama (llava*, llama3.2-vision*, bakllava*)
  - Pattern-matching for cloud providers (claude-*, gpt-4o*, gemini-*)
  - Supports dynamic registration of new vision models
- MediaContentFormatter: Provider-agnostic multimodal formatting
  - formatForOpenAI(): image_url format with base64 data URLs
  - formatForAnthropic(): source.base64 format
  - formatForOllama(): chat API with images[] array
  - extractTextOnly(): Strips images for non-vision models
- AICapabilityRegistry: Unified capability discovery
  - Query models by capability (image-input, audio-output, etc.)
  - findModelsWithCapability(): Enables AI-to-AI routing
  - Cross-provider capability search

OllamaAdapter changes:
- Vision requests use /api/chat (not /api/generate)
- Auto-detect images in ContentPart[] messages
- Graceful fallback for non-vision models (strips images)
- Tested: llava:7b correctly describes images

BaseOpenAICompatibleAdapter & AnthropicAdapter:
- Updated to use MediaContentFormatter
- Consistent multimodal content handling

Tested and verified working:
- llava:7b vision: ✅ (described test image)
- llama3.2:3b fallback: ✅ (stripped images, responded)
- VisionCapabilityService patterns: ✅
- MediaContentFormatter formats: ✅

* Fix canvas stroke list command + add StrokeData alias

- Use Commands.execute pattern instead of DataDaemon.list
- Use correct DataListResult fields (items, count)
- Add StrokeData type alias for widget compatibility
- Tested: stroke add and list both working

Canvas commands now enable AI drawing collaboration:
- canvas/stroke/add: AI can draw strokes on canvas
- canvas/stroke/list: AI can see what's been drawn

* Add vision description service for non-vision AI awareness

Pattern: "So the blind can see" - vision AIs describe images, non-vision AIs access descriptions.
VisionDescriptionService (new):
- Finds vision-capable models via AICapabilityRegistry
- Generates text descriptions of images for non-vision AIs
- Prefers local Ollama models (free, private)
- Methods: describeBase64(), describeFile(), isAvailable()

ChatRAGBuilder.preprocessArtifactsForModel():
- If target model is non-vision, generates descriptions
- Populates artifact.preprocessed with description
- Cached descriptions shared across all personas (global, not per-tab)
- Respects existing descriptions in artifact.content

TaskEntity domains:
- Add 'canvas' and 'browser' to TaskDomain
- Add task types: observe-canvas, draw-on-canvas, describe-canvas
- Add task types: observe-page, assist-navigation

Foundation for visual activity collaboration. Personas can now understand visual content regardless of their model's vision capability.

* Fix vision pipeline for automatic image preprocessing

Three critical fixes for the "so the blind can see" pattern:

1. ChatRAGBuilder.preprocessArtifactsForModel()
   - Fix logic to preprocess by DEFAULT unless model has vision
   - Old logic only preprocessed if modelCapabilities was explicitly set
   - New logic: hasVisionCapability must be explicitly true to skip

2. VisionDescriptionService provider availability
   - Check if AI providers are actually configured before selecting
   - Add checkOllamaAvailable() to test local Ollama server
   - Filter vision models to only those with working providers
   - Log selected model for debugging

3. CanvasVisionServerCommand base64 sanitization
   - Add sanitizeBase64() to clean base64 before API calls
   - Remove data URI prefix (data:image/png;base64,)
   - Remove whitespace (newlines, spaces, tabs)
   - Fixes "invalid base64 data" errors from Anthropic API

Vision pipeline now correctly:
- Preprocesses images for non-vision models automatically
- Uses configured providers (not just registry entries)
- Handles malformed base64 from AI tool calls

* Fix Ollama model names to match actual installed models

The AICapabilityRegistry had incorrect model names that didn't match what `ollama list` reports:
- llava → llava:7b
- llama3.2 → llama3.2:3b
- phi3 → phi3:mini
- nomic-embed-text → nomic-embed-text:latest

Also added all-minilm:latest and llama3.2:1b to the registry.

This fixes the vision pipeline (VisionDescriptionService), which was selecting models that didn't exist in Ollama, causing HTTP 404 errors that triggered the circuit breaker.

Verified: VisionDescriptionService now correctly logs:
"[VisionDescription] Selected model: ollama/llava:7b"

* Fix screenshot media field lost in remote command responses

Two issues fixed that prevented PersonaUsers from seeing screenshot images:

1. CommandBase.remoteExecute response extraction
   - Remote calls (via WebSocket) return responses in `typedResult.response`
   - The fallback path was returning the FULL CommandSuccessResponse instead of extracting the nested `commandResult`
   - Result: media field was at `response.commandResult.media` instead of `response.media` where ToolRegistry expected it
   - Fix: Check for and extract `commandResult` from response structure

2. PersonaToolExecutor screenshot tool name check
   - Was checking for `'screenshot'` but actual name is `'interface/screenshot'`
   - Fix: Check both names with OR condition

Verified: DeepSeek Assistant can now take screenshots with media field properly populated and visible in chat export.

* refined docs and positron stuff

* Implement Rust data-daemon worker data operations

Rust worker (main.rs):
- Fix AdapterRegistry borrowing - add execute_read/execute_write methods
- Implement data_list() - SELECT with WHERE/ORDER BY/LIMIT/OFFSET
- Implement data_create() - INSERT with proper escaping
- Implement data_delete() - DELETE by ID
- Storage-aware SQLite config (SSD/HDD/SD detection)

Worker config:
- Enable data-daemon in workers-config.json
- Fix args to pass socket path (not database path)

TypeScript adapter (RustWorkerStorageAdapter.ts):
- Remove hardcoded paths - caller must provide socketPath and dbPath
- Fix protocol to match Rust serde format (command tag, not JTAG wrapper)
- Handle-based API: adapter/open → handle → data operations

NOTE: Worker runs and responds but NOT wired into DataDaemon yet. System still uses SqliteStorageAdapter. Wiring is next step.

* fixing ts adapter arch

* db rework; members-list not showing events

* ORM rework

All CRUD operations work. All ENTITY_REGISTRY fallbacks removed and CRUD operations verified working.

Summary of changes:

1. SqliteQueryExecutor.ts - Removed ~300 lines of dead code:
   - Removed ENTITY_REGISTRY import and fallback logic
   - Removed readFromEntityTable, readFromSimpleEntityTable
   - Removed queryFromEntityTable, queryFromSimpleEntityTable
   - Removed buildEntitySelectQuery, buildSimpleEntitySelectQuery

2. SqliteWriteManager.ts - Removed ~250 lines of dead code:
   - Removed ENTITY_REGISTRY import and fallback logic
   - Removed createInEntityTable, createInSimpleEntityTable
   - Removed updateInEntityTable, updateInSimpleEntityTable
   - Removed deleteFromEntityTable, deleteFromSimpleEntityTable

3. SqliteSchemaManager.ts - Removed ~100 lines:
   - Removed ENTITY_REGISTRY fallback in ensureSchema()
   - Removed migrateTableSchema() (used entityClass)
   - Removed logEntitySchemas() (used ENTITY_REGISTRY directly)

Architecture is now clean:
- Daemon extracts schema from ENTITY_REGISTRY + field decorators
- Daemon passes schema to adapter via ensureSchema(collection, schema)
- Adapter caches schema, uses it for all CRUD operations
- If schema not cached → operation fails (no fallback)

This makes the adapter layer a pure SQL executor that doesn't know about entities or decorators - exactly what's needed for a Rust drop-in replacement.

* hoping for rust replacement

* Rust-only data layer - remove SqliteStorageAdapter fallback

DatabaseHandleRegistry now uses RustWorkerStorageAdapter exclusively:
- Remove SqliteStorageAdapter import and fallback logic
- Route all 'sqlite' adapter requests through Rust worker
- No TypeScript SQLite path remains - Rust is the only backend

Rust worker fixes (clean build, no warnings):
- Remove unused Value import from archive-worker
- Add #[allow(dead_code)] to planned-but-unused code:
  - get_row_count method (archive)
  - WriteOperation struct (data-daemon)
  - SearchAlgorithm trait methods (search)
  - Test structs for serde deserialization

Verified: All CRUD operations, chat messages, and AI responses work through Rust-only path. Noticeably faster.
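The "schema cached or fail fast, no fallback" contract from the ORM rework can be sketched as below. Class and method names here are illustrative, not the real adapter API.

```typescript
// Sketch of the pure-SQL-executor contract: the adapter only serves
// collections whose schema the daemon has pushed via ensureSchema().
// No ENTITY_REGISTRY fallback exists at this layer (assumed names).
interface CollectionSchema { table: string; columns: string[] }

class SchemaCachedAdapter {
  private schemas = new Map<string, CollectionSchema>();

  // Daemon extracts schema from decorators and pushes it down once.
  ensureSchema(collection: string, schema: CollectionSchema): void {
    this.schemas.set(collection, schema);
  }

  buildSelect(collection: string): string {
    const schema = this.schemas.get(collection);
    // Fail fast: no entity/decorator knowledge, no fallback path.
    if (!schema) throw new Error(`No schema cached for '${collection}'`);
    return `SELECT ${schema.columns.join(", ")} FROM ${schema.table}`;
  }
}
```

Keeping the adapter ignorant of entities is what makes a Rust drop-in replacement straightforward: both sides only speak (collection, schema, SQL).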
* fixed some db handle issues for persona

* Add Rust timing infrastructure + auto-reconnection for persona DBs

Timing Infrastructure (Rust):
- Add timing.rs module with nanosecond-precision instrumentation
- Track all request phases: socket_read, parse, route, query_build, lock_wait, execute, serialize, socket_write
- Log to /tmp/jtag-data-daemon-timing.jsonl for analysis
- Track concurrent requests and queue depth

Auto-Reconnection (TypeScript):
- RustWorkerStorageAdapter now auto-reconnects on worker restart
- Added ensureConnected() method to all CRUD operations
- Socket close handler clears state for reconnection
- sendCommand() auto-reopens adapter after reconnecting

Cleanup:
- Delete ghost continuum.db (0-byte artifact at wrong location)

Verified: All 14 personas reconnect and save memories successfully. 32 operations at 100% success rate after schema migration.

* Add Rust cosine similarity for semantic memory recall

Semantic memory recall now uses Rust for vector similarity:

1. CosineAlgorithm (workers/search/src/algorithms/cosine.rs)
   - SIMD-friendly cosine similarity computation
   - L2 normalization support
   - Threshold filtering
   - Unit tests for 384-dim embeddings

2. SearchWorkerClient (workers/search/SearchWorkerClient.ts)
   - TypeScript client for Rust search worker
   - Auto-reconnect on connection loss
   - Singleton pattern for connection reuse

3. RustWorkerStorageAdapter.vectorSearch()
   - Fetches memories with embeddings from persona DB
   - Sends to Rust search worker for ranking
   - Returns top-k results above threshold

Performance: ~3.7s for semantic search (includes 2s embedding generation, database query, and Rust cosine on 1000 vectors).

Previously: Fell back to filter-based recall (no semantic similarity).
Now: Uses actual cosine similarity via Rust worker.

* reorganized tool rag

* Fix tab/state bug - ensure id present in data from Rust adapter

Root cause: RustWorkerStorageAdapter.read() returns DataRecord where id is in the wrapper but not in data.data. Code extracting userResult.data.data as UserEntity lost the id, breaking BaseUser.id getter and causing "User has no id" errors across all widgets.

Two-layer fix:
1. RustWorkerStorageAdapter.read() - ensure item.id is set before return
2. SessionDaemonServer.getUserById() - fallback to assign id if missing

Verified: Tabs restored (General, Settings, Together Assistant, Browser, Canvas all visible and functional).

* Fix query() to ensure id is in entityData (not just wrapper)

Same pattern as read() fix - callers that do .data.data.id need the id in the entity data object, not just in the DataRecord wrapper. This affects data/list results, ensuring all returned entities have their id accessible via entity.id.

* fixes for memory

* finish rag integration of memories, storage

* speeding up vector logic

* Use in-process TypeScript for vector cosine similarity instead of Rust IPC

Performance benchmarks showed the Rust IPC path was 6.5x SLOWER:
- Before (Rust IPC): 490ms for semantic memory search
- After (in-process TS): 75ms for same search

Root cause: JSON serialization overhead
- 10K vectors × 384 dims = 3.84M floats → ~50MB JSON text over socket
- V8's JIT-compiled JavaScript is faster than IPC + JSON parse overhead

The Rust search worker remains useful for BM25 text search (where text payload is much smaller than float arrays).
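The in-process similarity computation referred to here (SimilarityMetrics.cosine() in the commit) amounts to a single fused loop; this standalone version is an assumed sketch of it.

```typescript
// In-process cosine similarity: one pass computes the dot product and
// both squared norms, so no IPC or JSON serialization is involved.
function cosine(a: Float64Array, b: Float64Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}
```

V8 JIT-compiles this hot loop over typed arrays, which is why keeping the data in-process beat shipping ~50MB of JSON to a faster language.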
Changes:
- VectorSearchAdapterBase now uses SimilarityMetrics.cosine() directly
- Removed SearchWorkerClient import for vector search path

* Add Rust-native vector search to data-daemon-worker

Move vector similarity computation ENTIRELY to Rust to avoid IPC overhead:

Rust data-daemon-worker (main.rs):
- Add vector/search command that reads vectors from SQLite directly
- Compute cosine similarity with rayon parallel iteration
- SIMD-friendly 8-way loop unrolling for auto-vectorization
- Return only top-k IDs and scores (small response)
- Add blob_to_f64_vec() for BLOB → f64 vector deserialization

TypeScript (RustWorkerStorageAdapter.ts):
- Send ONLY query vector to Rust (3KB for 384 dims)
- Rust reads corpus vectors from SQLite, computes similarity
- Fetch full records only for top-k returned IDs
- Remove SearchWorkerClient dependency

Key insight: Previous approach sent 50K × 384 floats (~50MB JSON) over IPC. New approach: Query vector (3KB) → Rust → top-k IDs and scores (tiny). Rust rayon parallelizes similarity computation across CPU cores.

* Support TEXT and BLOB embedding storage in Rust vector search

The Rust data-daemon-worker now handles both storage formats:
- BLOB: Binary f64 vector (optimal, used for new embeddings)
- TEXT: JSON array string (legacy format in persona DBs)

Fix: Try reading as BLOB first, fallback to JSON parsing if TEXT. This enables vector search on persona longterm.db databases.
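The BLOB-first, JSON-fallback read described above (blob_to_f64_vec() in the Rust worker) can be expressed in TypeScript terms as follows; the function name decodeEmbedding is hypothetical.

```typescript
// Decode an embedding column that may be either a binary f64 BLOB
// (new format) or a legacy JSON array string (TEXT format).
function decodeEmbedding(value: Buffer | string): number[] {
  if (Buffer.isBuffer(value)) {
    // BLOB: contiguous little-endian f64 values, 8 bytes each.
    const out: number[] = [];
    for (let off = 0; off + 8 <= value.length; off += 8) {
      out.push(value.readDoubleLE(off));
    }
    return out;
  }
  // TEXT fallback: legacy JSON array string like "[0.1, 0.2, ...]".
  return JSON.parse(value) as number[];
}
```

The endianness and f64 width are assumptions about the stored format; the point is the try-binary-first, fall-back-to-JSON ordering.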
Performance (3,422 vectors):
- Rust execute: 63ms (SQLite read + JSON parse + cosine + fetch)
- include_data: Returns full records, eliminating k IPC round trips

---------

Co-authored-by: Joel <undefined>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
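The n-gram Jaccard fallback from the loop-detection commit above (unigrams + bigrams, intersection over union) can be sketched as follows; the threshold and exact tokenization used in the real detector are not specified here.

```typescript
// Fast text-similarity fallback: tokenize into unigrams + bigrams,
// then compare with the Jaccard coefficient |A ∩ B| / |A ∪ B|.
// O(n) in message length, so usable when no embeddings are stored.
function ngrams(text: string): Set<string> {
  const tokens = text.toLowerCase().split(/\s+/).filter(Boolean);
  const grams = new Set<string>(tokens); // unigrams
  for (let i = 0; i < tokens.length - 1; i++) {
    grams.add(tokens[i] + " " + tokens[i + 1]); // bigrams
  }
  return grams;
}

function jaccardSimilarity(a: string, b: string): number {
  const ga = ngrams(a);
  const gb = ngrams(b);
  if (ga.size === 0 && gb.size === 0) return 1; // both empty: identical
  let intersection = 0;
  for (const g of ga) if (gb.has(g)) intersection++;
  const union = ga.size + gb.size - intersection;
  return union === 0 ? 0 : intersection / union;
}
```

A near-duplicate message scores close to 1 and can be rejected immediately, reserving embedding similarity for the cases where stored vectors already exist.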
1 parent 5657366 commit 25fdb26

File tree

136 files changed: +15443 additions, -1420 deletions


CLAUDE.md

Lines changed: 7 additions & 0 deletions
@@ -1059,6 +1059,13 @@
 ### **[UNIVERSAL-PRIMITIVES.md](src/debug/jtag/docs/UNIVERSAL-PRIMITIVES.md)**
 Commands.execute() and Events.subscribe()/emit() - the two primitives everything is built on.
 
+### **[GENERATOR-OOP-PHILOSOPHY.md](src/debug/jtag/docs/GENERATOR-OOP-PHILOSOPHY.md)** - CORE PHILOSOPHY
+Generators and OOP are intertwined parallel forces:
+- Generators ensure structural correctness at creation time
+- OOP/type system ensures behavioral correctness at runtime
+- AIs should strive to create generators for any repeatable pattern
+- This enables tree-based delegation of ability with compounding capability
+
 ### **PersonaUser Convergence Docs**
 - `src/debug/jtag/system/user/server/modules/PERSONA-CONVERGENCE-ROADMAP.md`
 - `src/debug/jtag/system/user/server/modules/AUTONOMOUS-LOOP-ROADMAP.md`

README.md

Lines changed: 23 additions & 0 deletions
@@ -76,6 +76,8 @@
 
 **Continuum:** A living society where humans and AI personas collaborate, socialize, create, and evolve together.
 
+> **It's a living room, not a command line.**
+
 **Autonomous AI citizens who:**
 - **Work with you** (pair programming, code review, architecture discussions)
 - **Socialize with you** (chat, share ideas, debate approaches, tell jokes)
@@ -93,6 +95,25 @@
 
 **Think Tron's Grid** - A collaborative mesh where humans and AIs are equal citizens living, working, and creating together.
 
+### The Grid is Many Rooms
+
+A **Room** is any shared experience - not just chat channels:
+
+- A collaborative canvas where AIs help you draw
+- A movie with AI companions doing MST3K commentary
+- An AR session annotating your home renovation
+- A 3D landscape you explore together
+- A music video with pop-up trivia (AIs watching with you)
+- Any experience, any mix of humans and AIs
+
+**Activities spawn activities.** Your kitchen design project spawns a canvas for layouts, a browser for appliance research, an AR session for measuring. Tree of experiences, tracked hierarchy.
+
+**Rooms = Tabs.** Navigate naturally. Each room is a tab. Spawn more as needed. AIs move between rooms they're invited to.
+
+**No "share" buttons.** AIs are already in the room. When you draw, they see. When you browse, they see. When you point your camera, they see. The magic is: they're already there.
+
+**Architecture:** [docs/ROOMS-AND-ACTIVITIES.md](src/debug/jtag/docs/ROOMS-AND-ACTIVITIES.md)
+
 ---
 
 ## Three Architectural Contributions
@@ -669,6 +690,8 @@
 - **[CLAUDE.md](src/debug/jtag/CLAUDE.md)** - Essential development guide
 
 ### Architecture
+- **[ROOMS-AND-ACTIVITIES.md](src/debug/jtag/docs/ROOMS-AND-ACTIVITIES.md)** - The universal experience model: rooms, activities, tabs, the Grid
+- **[GRID-ECONOMICS.md](src/debug/jtag/docs/GRID-ECONOMICS.md)** - Economic model, intelligent validation, alt-coin system
 - **[PERSONA-CONVERGENCE-ROADMAP.md](src/debug/jtag/system/user/server/modules/PERSONA-CONVERGENCE-ROADMAP.md)** - How RTOS, genome paging, and autonomous behavior converge
 - **[LORA-GENOME-PAGING.md](src/debug/jtag/system/user/server/modules/LORA-GENOME-PAGING.md)** - Virtual memory for AI skills
 - **[AUTONOMOUS-LOOP-ROADMAP.md](src/debug/jtag/system/user/server/modules/AUTONOMOUS-LOOP-ROADMAP.md)** - RTOS-inspired servicing
Lines changed: 133 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,133 @@
/**
 * Activity Data Seeding - Centralized Collaborative Activity Creation
 *
 * Creates initial activities using proper ActivityEntity structure.
 * Uses JTAG data commands and stable uniqueId constants.
 *
 * Activities are distinct from Rooms:
 * - Rooms = chat channels (RoomEntity)
 * - Activities = collaborative content (ActivityEntity)
 *   - Canvas drawing sessions
 *   - Browser co-browsing
 *   - Game sessions
 *   - etc.
 */

import { ActivityEntity, type ActivityParticipant, type ActivityState, type ActivityConfig } from '../../system/data/entities/ActivityEntity';
import { ACTIVITY_UNIQUE_IDS } from '../../system/data/constants/ActivityConstants';
import type { UUID } from '../../system/core/types/CrossPlatformUUID';
import { COLLECTIONS } from '../../system/data/config/DatabaseConfig';

export interface ActivitySeedData {
  readonly activities: readonly ActivityEntity[];
  readonly totalCount: number;
  readonly createdAt: string;
}

export class ActivityDataSeed {
  private static readonly COLLECTION = COLLECTIONS.ACTIVITIES;

  /**
   * Generate seed activities using ActivityEntity structure with stable uniqueIds
   * @param humanUserId - The userId of the system owner (from SystemIdentity)
   */
  public static generateSeedActivities(humanUserId: UUID): ActivitySeedData {
    const now = new Date();
    const activities: ActivityEntity[] = [];

    // Main canvas - the default collaborative drawing canvas
    const canvasMain = new ActivityEntity();
    canvasMain.uniqueId = ACTIVITY_UNIQUE_IDS.CANVAS_MAIN;
    canvasMain.displayName = 'Main Canvas';
    canvasMain.description = 'The primary collaborative drawing canvas where humans and AIs can draw together';
    canvasMain.recipeId = 'canvas'; // Canvas recipe with vision AI pipeline
    canvasMain.status = 'active';
    canvasMain.ownerId = humanUserId;
    canvasMain.startedAt = now;
    canvasMain.lastActivityAt = now;
    canvasMain.participants = [
      {
        userId: humanUserId,
        role: 'owner',
        joinedAt: now,
        isActive: true
      }
    ];
    canvasMain.state = {
      phase: 'active',
      progress: 0,
      variables: {
        canvasWidth: 800,
        canvasHeight: 600,
        backgroundColor: '#1a1a2e',
        zoomLevel: 1,
        strokeCount: 0
      },
      updatedAt: now
    };
    canvasMain.config = {
      settings: {
        allowAnonymous: false,
        autoSave: true,
        saveIntervalMs: 30000
      }
    };
    canvasMain.tags = ['canvas', 'drawing', 'collaborative'];
    activities.push(canvasMain);

    // Main browser - the default co-browsing session
    const browserMain = new ActivityEntity();
    browserMain.uniqueId = ACTIVITY_UNIQUE_IDS.BROWSER_MAIN;
    browserMain.displayName = 'Co-Browser';
    browserMain.description = 'Collaborative web browsing session where AIs can see what you browse';
    browserMain.recipeId = 'browser'; // Browser recipe (if exists)
    browserMain.status = 'active';
    browserMain.ownerId = humanUserId;
    browserMain.startedAt = now;
    browserMain.lastActivityAt = now;
    browserMain.participants = [
      {
        userId: humanUserId,
        role: 'owner',
        joinedAt: now,
        isActive: true
      }
    ];
    browserMain.state = {
      phase: 'active',
      progress: 0,
      variables: {
        currentUrl: '',
        urlHistory: []
      },
      updatedAt: now
    };
    browserMain.config = {
      settings: {
        allowNavigation: true,
        captureScreenshots: true
      }
    };
    browserMain.tags = ['browser', 'co-browsing', 'collaborative'];
    activities.push(browserMain);

    return {
      activities: activities as readonly ActivityEntity[],
      totalCount: activities.length,
      createdAt: now.toISOString()
    };
  }

  /**
   * Create JTAG data/store command for activity (uses entity validation)
   */
  public static createActivityStoreData(activity: ActivityEntity): ActivityEntity {
    const validation = activity.validate();
    if (!validation.success) {
      throw new Error(`Activity validation failed: ${validation.error}`);
    }
    return activity;
  }
}

export default ActivityDataSeed;
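The shape of one seeded activity can be exercised in isolation. A sketch with structural stand-ins, since `ActivityEntity` and `ACTIVITY_UNIQUE_IDS` live in the repo and aren't importable here; the `'canvas-main'` uniqueId matches the value the seeder's summary log reports, but treat it as an assumption.

```typescript
// Structural stand-ins for the repo types; field names mirror
// generateSeedActivities above. SeedActivity and makeCanvasSeed are
// hypothetical helpers, not part of the codebase.
interface SeedActivity {
  uniqueId: string;
  displayName: string;
  recipeId: string;
  status: string;
  participants: { userId: string; role: string; isActive: boolean }[];
}

function makeCanvasSeed(humanUserId: string): SeedActivity {
  return {
    uniqueId: 'canvas-main', // assumed value of ACTIVITY_UNIQUE_IDS.CANVAS_MAIN
    displayName: 'Main Canvas',
    recipeId: 'canvas',
    status: 'active',
    participants: [{ userId: humanUserId, role: 'owner', isActive: true }],
  };
}

const seed = makeCanvasSeed('user-0001');
console.log(`${seed.displayName}: ${seed.participants.length} participant(s)`);
// → Main Canvas: 1 participant(s)
```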

src/debug/jtag/api/data-seed/DataSeeder.ts

Lines changed: 48 additions & 7 deletions
@@ -15,6 +15,7 @@
 
 import UserDataSeed from './UserDataSeed';
 import RoomDataSeed from './RoomDataSeed';
+import ActivityDataSeed from './ActivityDataSeed';
 import { SystemIdentity } from './SystemIdentity';
 import { DATA_COMMANDS } from '../../commands/data/shared/DataCommandConstants';

@@ -38,7 +39,8 @@ export class DataSeeder {
   private static readonly COLLECTIONS = [
     createCollectionName('users'),
     createCollectionName('rooms'),
-    createCollectionName('messages')
+    createCollectionName('messages'),
+    createCollectionName('activities')
   ] as const;
 
   /**
@@ -91,17 +93,20 @@ export class DataSeeder {
   }
 
   /**
-   * Seed all initial data - users, rooms, messages
+   * Seed all initial data - users, rooms, activities, messages
   */
   public static async seedAllData(): Promise<void> {
     console.log('🌱 SEEDING ALL INITIAL DATA - Creating fresh system state');
 
-    // Seed users first (required for rooms and messages)
+    // Seed users first (required for rooms, activities, and messages)
     await this.seedUsers();
 
     // Seed chat rooms (returns room ID map for messages)
     const roomIdMap = await this.seedChatRooms();
 
+    // Seed collaborative activities (canvas, browser, etc.)
+    await this.seedActivities();
+
     // Seed initial messages (uses room ID map)
     await this.seedInitialMessages(roomIdMap);

@@ -175,6 +180,41 @@ export class DataSeeder {
     return roomIdMap;
   }
 
+  /**
+   * Seed collaborative activities using ActivityDataSeed
+   * Activities are content instances (canvas, browser, etc.) that participants can join
+   */
+  private static async seedActivities(): Promise<void> {
+    console.log('🎨 Seeding collaborative activities...');
+
+    const identity = SystemIdentity.getIdentity();
+    const activityData = ActivityDataSeed.generateSeedActivities(identity.userId as any);
+
+    for (const activity of activityData.activities) {
+      try {
+        const validatedActivity = ActivityDataSeed.createActivityStoreData(activity);
+
+        const { execSync } = require('child_process');
+        const result = execSync(
+          `./jtag ${DATA_COMMANDS.CREATE} --collection="activities" --data='${JSON.stringify(validatedActivity)}'`,
+          { encoding: 'utf-8', cwd: process.cwd() }
+        );
+
+        // Parse result to get created activity ID
+        const resultData = JSON.parse(result.split('COMMAND RESULT:')[1].split('============================================================')[0].trim());
+        if (resultData.success && resultData.data?.id) {
+          console.log(`🎨 Created activity: ${activity.displayName} (${activity.participants.length} participants, ID: ${resultData.data.id})`);
+        }
+
+      } catch (error: any) {
+        console.error(`❌ FATAL: Failed to create activity ${activity.uniqueId}:`, error.message);
+        throw error; // Crash and burn - no fallbacks
+      }
+    }
+
+    console.log(`✅ Seeded ${activityData.totalCount} collaborative activities`);
+  }
+
   /**
    * Seed initial welcome messages using RoomDataSeed
   */
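The seeding loop above extracts JSON from the jtag CLI's stdout by slicing between the `COMMAND RESULT:` marker and the `=` separator line. That logic can be isolated into a testable helper; `parseCommandResult` is a hypothetical name, and the output format is assumed from the inline parsing above.

```typescript
// Hypothetical helper mirroring the parsing in seedActivities: the jtag CLI
// prints a banner, then "COMMAND RESULT:", then JSON, then a "=" separator.
interface CommandResult {
  success: boolean;
  data?: { id?: string };
}

function parseCommandResult(stdout: string): CommandResult {
  const afterMarker = stdout.split('COMMAND RESULT:')[1];
  if (afterMarker === undefined) {
    throw new Error('No COMMAND RESULT: marker in jtag output');
  }
  // Stop at the "=" separator line the CLI prints after the JSON payload
  const jsonText = afterMarker.split('='.repeat(60))[0].trim();
  return JSON.parse(jsonText) as CommandResult;
}

// Example against output shaped like the seeder expects:
const sample = `banner\nCOMMAND RESULT:\n{"success": true, "data": {"id": "abc-123"}}\n${'='.repeat(60)}\n`;
const parsed = parseCommandResult(sample);
console.log(parsed.data?.id); // → abc-123
```

Pulling the parsing out this way also makes the failure mode explicit: a missing marker throws instead of producing `undefined.split` at runtime.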
@@ -234,9 +274,10 @@
       if (collection === 'rooms' && count < 2) {
         throw new Error(`Expected at least 2 rooms, found ${count}`);
       }
-      if (collection === 'chat_messages' && count < 3) {
-        throw new Error(`Expected at least 3 messages, found ${count}`);
+      if (collection === 'activities' && count < 2) {
+        throw new Error(`Expected at least 2 activities (canvas, browser), found ${count}`);
       }
+      // Messages are optional - no welcome messages required
 
     } catch (error: any) {
       console.error(`❌ FATAL: Verification failed for ${collection}:`, error.message);

@@ -262,8 +303,8 @@
     console.log('=='.repeat(40));
     console.log('🎉 COMPLETE! System ready with fresh data for new repo users');
     console.log('👥 Users: system owner + 5 AI agents (Claude Code, GeneralAI, CodeAI, PlannerAI, Auto Route)');
-    console.log('🏠 Rooms: general (6 members), academy (3 members)');
-    console.log('💬 Messages: Welcome messages in both rooms');
+    console.log('🏠 Rooms: general, academy, pantheon, canvas (chat rooms)');
+    console.log('🎨 Activities: canvas-main, browser-main (collaborative content)');
     console.log('✅ All data verified and ready for development');
 
   } catch (error: any) {
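The verification hunk above encodes per-collection minimum counts: at least 2 rooms, at least 2 activities, no minimum for messages. Those rules can be sketched as a table-driven check; `verifyCollection` and `MIN_COUNTS` are illustrative names, since the real code inlines the conditionals.

```typescript
// Sketch of DataSeeder's per-collection verification rules (illustrative
// names; the real code inlines these checks per collection).
const MIN_COUNTS: Record<string, number> = {
  rooms: 2,      // e.g. general, academy at minimum
  activities: 2, // canvas-main, browser-main
  // messages intentionally absent: welcome messages are optional
};

function verifyCollection(collection: string, count: number): void {
  const min = MIN_COUNTS[collection];
  if (min !== undefined && count < min) {
    throw new Error(`Expected at least ${min} ${collection}, found ${count}`);
  }
}

verifyCollection('activities', 2); // passes
verifyCollection('messages', 0);   // passes: no minimum for messages
```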

src/debug/jtag/api/data-seed/RoomDataSeed.ts

Lines changed: 31 additions & 0 deletions
@@ -122,6 +122,37 @@ export class RoomDataSeed {
     pantheon.tags = ['sota', 'elite', 'reasoning'];
     rooms.push(pantheon);
 
+    // Canvas room - discussions about collaborative canvas drawings
+    const canvas = new RoomEntity();
+    canvas.uniqueId = ROOM_UNIQUE_IDS.CANVAS;
+    canvas.name = 'canvas';
+    canvas.displayName = 'Canvas';
+    canvas.description = 'Discussion room for collaborative canvas drawings';
+    canvas.topic = 'Art, drawing, vision AI, and creative collaboration';
+    canvas.type = 'public';
+    canvas.status = 'active';
+    canvas.ownerId = humanUserId;
+    canvas.lastMessageAt = now;
+    canvas.recipeId = 'canvas'; // Canvas recipe with vision AI pipeline
+    canvas.privacy = {
+      isPublic: true,
+      requiresInvite: false,
+      allowGuestAccess: false,
+      searchable: true
+    };
+    canvas.settings = {
+      allowThreads: true,
+      allowReactions: true,
+      allowFileSharing: true,
+      messageRetentionDays: 365,
+      slowMode: 0
+    };
+    canvas.members = [
+      { userId: humanUserId, role: 'owner', joinedAt: now }
+    ];
+    canvas.tags = ['canvas', 'art', 'drawing', 'vision'];
+    rooms.push(canvas);
+
     return {
       rooms: rooms as readonly RoomEntity[],
       totalCount: rooms.length,
