This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
A.R.C Studio — Content optimization platform combining neural response prediction (TRIBE v2), multi-agent social simulation (MiroFish-Offline), and LLM-driven iterative optimization (Claude) into a single feedback loop. Phase 1 POC: single-user, local machine, non-commercial.
- Hardware: Single RTX 5070 Ti GPU shared between TRIBE v2 and Ollama embeddings
- API: Claude API rate limits — Haiku batched, Opus sequential (4-8 calls/campaign)
- Dependency: TRIBE v2 requires HuggingFace LLaMA 3.2-3B gated model approval
- Dependency: MiroFish-Offline is a Git submodule — minimal modifications to enable upstream merges
- Performance: Full campaign (40 agents, 4 iterations) must complete in <= 20 minutes
- PyTorch: Pinned to 2.5.1-2.6.x (no native sm_120 support). See docs/pytorch_upgrade_path.md for the upgrade path to 2.8+.
- Scope: Phase 1 POC only — no auth, no HTTPS, no multi-user
```bash
docker compose up -d           # Start all Docker services
docker compose down            # Stop all
docker compose up -d litellm   # Restart just LiteLLM (e.g., after API key refresh)
```

```bash
bash tribe_scorer/start.sh     # Start scorer (loads model ~60s, then seeds baseline)
```

```bash
# IMPORTANT: Run from project root, not from inside orchestrator/
python -m uvicorn orchestrator.api:create_app --factory --port 8000
```

```bash
cd ui && npm run dev           # Vite dev server
cd ui && npm run build         # Production build (tsc + vite)
cd ui && npm run lint          # ESLint
```

```bash
python -m orchestrator.cli \
  --seed-content "Your content..." \
  --prediction-question "How will the audience react?" \
  --demographic tech_professionals \
  --max-iterations 2 \
  --output results/my_campaign.json
```

```bash
# All orchestrator tests (194 tests, pytest with asyncio_mode=auto)
pytest

# Single test file
pytest orchestrator/tests/test_campaign_runner.py

# Single test
pytest orchestrator/tests/test_campaign_runner.py::test_function_name -v

# With output
pytest -s orchestrator/tests/test_composite_scorer.py
```

`pyproject.toml` sets `testpaths = ["orchestrator/tests"]` and `asyncio_mode = "auto"`.
```bash
# Update .env with current Claude OAuth token
bash scripts/refresh-env.sh

# Also restart LiteLLM container to pick up new key
bash scripts/refresh-env.sh --restart
```

The orchestrator also auto-refreshes the LiteLLM API key on startup via `_refresh_litellm_api_key()` in `orchestrator/api/__init__.py`.
```
React UI (5173) → Orchestrator FastAPI (8000) → TRIBE v2 (8001) + MiroFish (5001)
                                              ↕ Claude API (Haiku + Opus)
MiroFish (Docker) → LiteLLM (4000) → Anthropic API
                 → Neo4j (7687)
                 → Ollama (11434, host)
```
1. Variant generation — Claude Haiku generates N content variants from seed + feedback
2. TRIBE v2 scoring — Each variant scored on 7 neural dimensions (text → TTS → WhisperX → LLaMA 3.2-3B → brain-encoding → ROI extraction → normalization)
3. MiroFish simulation — Multi-agent social simulation with Claude Haiku agents (create ontology → spawn agents → simulate → extract metrics)
4. Composite scoring — 7 formulas blend TRIBE neural scores + MiroFish social metrics
5. Cross-system analysis — Claude Opus analyzes why neural patterns led to social outcomes
6. Optimization loop — Threshold checking, convergence detection, iteration feedback
7. Report generation — 4-layer report (verdict, scorecard, mass psychology general + technical)
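The control flow of the optimization loop can be sketched as below. All function names (`generate_variants`, `score_variant`, `analyze`) are illustrative stubs, not the orchestrator's real API — the real steps call Claude Haiku, TRIBE + MiroFish, and Claude Opus respectively.

```python
# Hypothetical sketch of the iterate/score/feedback loop; stub implementations
# stand in for the Claude, TRIBE, and MiroFish calls.

def generate_variants(seed, feedback, n=3):
    # Stub: real version asks Claude Haiku for N variants of seed + feedback
    return [f"{seed} v{i} {feedback}".strip() for i in range(n)]

def score_variant(variant):
    # Stub: real version blends TRIBE neural scores with MiroFish social metrics
    return min(1.0, len(variant) / 40)

def analyze(variants, top_score):
    # Stub: real version asks Claude Opus for cross-system feedback
    return "more detail"

def run_campaign(seed, threshold=0.8, max_iterations=4):
    feedback, best = "", 0.0
    for _ in range(max_iterations):
        variants = generate_variants(seed, feedback)
        top = max(score_variant(v) for v in variants)
        if top <= best:          # convergence: no improvement over last iteration
            break
        best = top
        if top >= threshold:     # threshold check: good enough, stop early
            break
        feedback = analyze(variants, top)
    return best
```

The two exit conditions mirror the "threshold checking, convergence detection" step: the loop stops early either when a variant clears the threshold or when an iteration fails to beat the previous best.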
Graceful degradation: When TRIBE or MiroFish is unavailable, the pipeline continues with partial data. Composite scores that need the missing system return None. The system does a pre-flight health check at iteration start.
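The degradation rule can be sketched as follows; the formula, weights, and helper name are illustrative, not the real composite-scorer implementation.

```python
# Sketch of graceful degradation: composite formulas that need a missing
# system return None instead of a fabricated score.
from typing import Optional

def composite_engagement(tribe_scores: Optional[dict],
                         mirofish_metrics: Optional[dict]) -> Optional[float]:
    """Blend a TRIBE neural score with a MiroFish social metric.

    Returns None when either required system was unavailable, so callers
    can distinguish 'not computed' from a legitimately low score.
    """
    if tribe_scores is None or mirofish_metrics is None:
        return None  # partial data: skip formulas that need the missing system
    # Hypothetical 60/40 blend of one neural dimension and one social metric
    return 0.6 * tribe_scores["arousal"] + 0.4 * mirofish_metrics["share_rate"]
```

Returning `None` rather than `0.0` keeps partial-data campaigns honest in the report layer.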
Two Python runtimes: The orchestrator runs on system Python 3.13+. TRIBE v2 requires Python 3.11 specifically (pyannote.audio dependency) and runs in its own venv at tribe_scorer/.venv/.
OAuth credential flow: The Claude API key can come from ANTHROPIC_API_KEY env var, or falls back to reading the OAuth token from ~/.claude/.credentials.json. On 401 errors, the client refreshes credentials automatically. LiteLLM needs the key in .env and must be restarted when the token rotates.
Configuration: Both services use Pydantic BaseSettings loading from a shared .env file at repo root. Orchestrator config: orchestrator/config.py. TRIBE config: tribe_scorer/config.py.
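A minimal sketch of the shared-settings pattern, assuming pydantic-settings v2; the field names are illustrative — the real classes live in `orchestrator/config.py` and `tribe_scorer/config.py`.

```python
# Config fragment (illustrative field names): both services follow this shape,
# loading the shared .env at repo root with env vars taking precedence.
from pydantic_settings import BaseSettings, SettingsConfigDict

class OrchestratorSettings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    anthropic_api_key: str = ""
    tribe_base_url: str = "http://localhost:8001"
    mirofish_base_url: str = "http://localhost:5001"
```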
TRIBE inference serialization: The TRIBE v2 model and its exca cache are not thread-safe. A threading.Lock (_inference_lock in tribe_scorer/main.py) ensures only one inference runs at a time.
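The pattern looks roughly like the sketch below; it mirrors the intent of `_inference_lock` without reproducing the real code, and `run_tribe_inference` is a stub standing in for the actual model call.

```python
# Sketch of inference serialization: a module-level lock forces FastAPI
# worker threads to run TRIBE inference one at a time.
import threading

_inference_lock = threading.Lock()

def run_tribe_inference(text: str) -> dict:
    # Stub standing in for the real TRIBE v2 forward pass + ROI extraction
    return {"dimensions": [0.0] * 7, "chars": len(text)}

def score_text(text: str) -> dict:
    # The model and its exca cache are not thread-safe, so every caller
    # must hold the lock for the duration of an inference.
    with _inference_lock:
        return run_tribe_inference(text)
```

Requests queue behind the lock rather than failing, which trades latency for correctness on the single shared GPU.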
Windows MAX_PATH workaround: TRIBE v2's exca cache creates deeply nested paths. On Windows, the cache folder defaults to C:\tc to stay under the 260-char limit.
Prompt templates: All Claude prompt templates live in orchestrator/prompts/. Demographic-specific cognitive weights are in demographic_profiles.py.
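A hypothetical shape for one profile entry is sketched below; the actual keys and weights in `demographic_profiles.py` may differ.

```python
# Illustrative (hypothetical) demographic profile: cognitive weights that
# bias variant generation and scoring for one audience.
TECH_PROFESSIONALS = {
    "name": "tech_professionals",
    "weights": {"novelty": 0.3, "credibility": 0.4, "utility": 0.3},
}

# Weights are assumed to be normalized so blended scores stay in [0, 1]
assert abs(sum(TECH_PROFESSIONALS["weights"].values()) - 1.0) < 1e-9
```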
- Orchestrator package layout: `api/` (FastAPI routes, schemas), `clients/` (HTTP clients for TRIBE + MiroFish + Claude), `engine/` (pipeline logic), `storage/` (SQLite via aiosqlite), `prompts/` (Claude prompt templates)
- TRIBE scorer layout: `scoring/` (model_loader, text_scorer, roi_extractor, normalizer), `vendor/tribev2/` (Git submodule, vendored TRIBE v2 source)
- Tests: All in `orchestrator/tests/`, using pytest-asyncio with `asyncio_mode=auto`. Shared fixtures in `conftest.py`. External services are mocked — the `mock_claude_client` fixture provides an `AsyncMock` with `call_haiku_json` / `call_opus_json`.
- API routes: All under the `/api` prefix. Routers split by domain: `campaigns.py`, `health.py`, `progress.py`, `reports.py`, `agents.py`
- SSE streaming: Campaign progress uses Server-Sent Events via `sse-starlette`. Progress queues are stored on `app.state.progress_queues`.
- Frontend: React 19 + Vite + TypeScript + Tailwind CSS v4 + shadcn/ui. Data fetching via TanStack React Query. Hooks in `src/hooks/`, API types in `src/api/`.
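The progress-streaming pattern can be sketched with only the standard library; the real implementation wraps a generator like this in sse-starlette's `EventSourceResponse`, with one queue per campaign on `app.state.progress_queues`. The event shape (`stage` key, `"done"` sentinel) is an assumption.

```python
# Sketch of SSE progress streaming: drain an asyncio.Queue of progress
# events and emit SSE-formatted frames until the campaign signals completion.
import asyncio
import json

async def event_stream(queue: asyncio.Queue):
    while True:
        event = await queue.get()
        yield f"data: {json.dumps(event)}\n\n"   # SSE wire format
        if event.get("stage") == "done":         # assumed completion sentinel
            break

async def demo():
    q = asyncio.Queue()
    await q.put({"stage": "scoring", "iteration": 1})
    await q.put({"stage": "done"})
    return [frame async for frame in event_stream(q)]
```

The queue decouples the campaign runner (producer) from the HTTP response (consumer), so slow clients never block the pipeline.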
| Layer | Technology |
|---|---|
| Frontend | React 19, Vite 8, TypeScript 5.9, Tailwind CSS 4, shadcn/ui, TanStack React Query, Recharts, React Router 7 |
| Orchestrator API | FastAPI, Pydantic v2, uvicorn, httpx, aiosqlite, anthropic SDK |
| TRIBE v2 Scorer | FastAPI, PyTorch, Transformers (LLaMA 3.2-3B), WhisperX, gTTS, spaCy |
| MiroFish | Flask (Docker), Neo4j 5.18, CAMEL-AI/OASIS agents |
| LLM Proxy | LiteLLM (Docker, OpenAI→Anthropic translation) |
| Embeddings | Ollama (nomic-embed-text, host-native) |
| Database | SQLite (orchestrator), Neo4j (MiroFish knowledge graphs) |
| Claude Models | Opus (cross-system analysis, reports), Haiku (variant generation, MiroFish agents) |
Before using Edit, Write, or other file-changing tools, start work through a GSD command so planning artifacts and execution context stay in sync.
Use these entry points:
- `/gsd:quick` for small fixes, doc updates, and ad-hoc tasks
- `/gsd:debug` for investigation and bug fixing
- `/gsd:execute-phase` for planned phase work
Do not make direct repo edits outside a GSD workflow unless the user explicitly asks to bypass it.
Profile not yet configured. Run `/gsd:profile-user` to generate your developer profile. This section is managed by `generate-claude-profile` — do not edit manually.