Before ANY task: search memory. After ANY success: store pattern. Non-negotiable.
// Search before working
mcp__claude-flow__memory_search({query: "[task keywords]", namespace: "patterns", limit: 10})
// Store after success
mcp__claude-flow__memory_store({namespace: "patterns", key: "[name]", value: "[what worked]"})
ONLY MCP tools: mcp__claude-flow__memory_store/search/list/retrieve/stats
NEVER claude-flow memory * CLI.
mcp__claude-flow__memory_search({query: "[task keywords]", namespace: "patterns", limit: 10})
mcp__claude-flow__memory_search({query: "[task type]", namespace: "tasks", limit: 5})
Bash("claude-flow hooks route --task '[task description]'")
mcp__claude-flow__memory_store({namespace: "patterns", key: "[pattern-name]", value: "[what worked]"})
Bash("claude-flow hooks post-edit --file '[main-file]' --train-neural true")
Bash("claude-flow hooks post-task --task-id '[id]' --success true --store-results true")
Check memory before: new features, debugging, refactoring, performance work.
Store after: bug fixes, completions, performance fixes, security discoveries.
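The search-before / store-after protocol can be sketched as a wrapper. This is an illustrative helper, not a real claude-flow API: the `mcp` object stands in for the actual MCP tool bindings, and `withMemory` is a hypothetical name.

```javascript
// Sketch of the memory protocol: search patterns before a task, store after success.
// `mcp` is a stand-in for the real MCP tool bindings; all names are illustrative.
async function withMemory(mcp, taskKeywords, runTask) {
  // 1. Search prior patterns before doing any work
  const prior = await mcp.memory_search({
    query: taskKeywords, namespace: "patterns", limit: 10,
  });

  // 2. Run the task with whatever patterns were found
  const result = await runTask(prior);

  // 3. On success, persist what worked back into the patterns namespace
  if (result.success) {
    await mcp.memory_store({
      namespace: "patterns", key: result.patternName, value: result.whatWorked,
    });
  }
  return result;
}
```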
| Trigger | Worker |
|---|---|
| Major refactor | optimize |
| New features | testgaps |
| Security changes | audit |
| API changes | document |
| 5+ file changes | map |
| Complex debug | deepdive |
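The trigger table can be encoded as a plain lookup. The worker names come from the table above; the dispatch function itself is a hypothetical sketch.

```javascript
// Trigger -> background worker mapping, from the table above.
const WORKER_TRIGGERS = {
  "major refactor": "optimize",
  "new features": "testgaps",
  "security changes": "audit",
  "api changes": "document",
  "5+ file changes": "map",
  "complex debug": "deepdive",
};

// Hypothetical dispatcher: returns the worker for a trigger, or null if none matches.
function pickWorker(trigger) {
  return WORKER_TRIGGERS[trigger.toLowerCase()] ?? null;
}
```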
Before starting any task, select the optimal orchestration approach. The decision tree below routes tasks to the right skill based on scope, complexity, and domain.
Don't know which skill? --> /route [describe your task] (unified dispatcher)
Single file, quick fix? --> Direct Edit (no skill needed)
Game dev (Godot/Unity/Unreal)? --> /game-dev
Bug/feature/review (single agent)? --> lazy-fetch blueprints
Multi-file feature with TDD? --> /build-with-quality
Swarm (3+ agents)? --> /swarm-advanced or hive-mind
GitHub ops (PR/release/CI)? --> github-* skills
Docs/reports? --> /report-builder, /docs-alignment
Media (image/video/3D)? --> /imagemagick, /ffmpeg, /blender, /comfyui
Browser automation? --> /playwright, /browser-automation
AI/ML (PyTorch, CUDA, notebooks)? --> /pytorch-ml, /cuda, /jupyter-notebooks
Memory/AgentDB? --> /agentdb-*, /lazy-fetch
Wardley maps / strategic analysis? --> /wardley-maps, /report-builder
UI/UX design? --> /ui-ux-pro-max-skill, /bencium-*, /design-audit, /typography
Architecture review? --> /vanity-engineering-review, /renaissance-architecture, /human-architect-mindset
Research + NotebookLM? --> /notebooklm, /perplexity-research, /gemini-url-context
Security / compliance? --> /defense-security
AEC (building architecture)? --> /studio [task]
SEO / content optimisation? --> /toprank
Full routing with all 72 active skills: see multi-agent-docker/skills/SKILL-DIRECTORY.md
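The first few branches of the decision tree can be sketched as a routing function. The predicates (`singleFile`, `quickFix`, etc.) are illustrative assumptions, not real task fields; in practice /route is the unified dispatcher.

```javascript
// Sketch of a few branches of the decision tree above. Predicates are
// illustrative; the real dispatcher is /route.
function routeSkill(task) {
  if (task.singleFile && task.quickFix) return "direct-edit";   // no skill needed
  if (task.domain === "game-dev") return "/game-dev";           // Godot/Unity/Unreal
  if (task.agents >= 3) return "/swarm-advanced";               // swarm work
  if (task.multiFile && task.tdd) return "/build-with-quality"; // multi-file TDD
  return "/route";                                              // fall back to the dispatcher
}
```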
| Skill | Agents | Scope | Memory | Best For |
|---|---|---|---|---|
| lazy-fetch | 1 (self) | Single session | RuVector bridge | Context discovery, plan tracking, blueprints, security scan |
| game-dev | 48 | Game project | Session state files | Game design, engine-specific code, team orchestration |
| ruflo/claude-flow | 1-15 | Multi-agent | RuVector native | Swarm coordination, complex features, cross-module work |
| build-with-quality | 111+ | QE pipeline | RuVector native | Testing, quality gates, coverage analysis |
| sparc-methodology | 5-8 | Full lifecycle | RuVector native | Specification through completion |
Skills compose. A single task may use multiple skills in sequence:
lazy gather "combat system"  -- discover relevant files first
/game-dev team-combat "melee attacks"  -- orchestrate the game dev team
lazy check  -- validate the implementation
lazy remember "combat" "melee uses hitbox detection, not raycasts"  -- persist
For ruflo swarms, each spawned agent uses worktree isolation:
- Agent spawns in worktree (`wt-add <name>` for Ruflo, `isolation: "worktree"` for Agent tool)
- Runs `lazy init` + `lazy gather` for its task scope
- Implements using lazy-fetch plan tracking
- Results merge back via ruflo coordination or git merge
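The per-agent flow above can be sketched as a sequence of commands. `run` is an injected command runner (e.g. wrapping `execSync`); the command strings come from the steps above, except the final `git merge`, which is illustrative — real merges go through ruflo coordination or git.

```javascript
// Sketch of the per-agent worktree flow. `run` executes a shell command;
// injecting it keeps the sequence testable without a real worktree.
function agentWorktreeFlow(run, name, taskScope) {
  run(`wt-add ${name}`);              // 1. spawn agent in an isolated worktree
  run("lazy init");                   // 2. initialise lazy-fetch in the worktree
  run(`lazy gather "${taskScope}"`);  // 3. discover files for the task scope
  // 4. ...agent implements using lazy-fetch plan tracking...
  run(`git merge ${name}`);           // 5. merge results back (illustrative step)
}
```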
For massive parallelisable work (migrations, bulk refactors), use /batch — it fans out to N worktree agents automatically.
- Search RuVector memory for task context
- Run `lazy read` if working in a lazy-fetch-initialised project
- Check hooks route for skill recommendation
- Select skill from decision tree above
- Begin work
| Tier | Handler | Latency | Cost | Use Cases |
|---|---|---|---|---|
| 1 | Agent Booster | <1ms | $0 | Simple transforms -- Skip LLM |
| 2 | Haiku | ~500ms | $0.0002 | Simple tasks, low complexity |
| 3 | Sonnet/Opus | 2-5s | $0.003-0.015 | Complex reasoning, security |
[AGENT_BOOSTER_AVAILABLE] -> Edit tool directly
[TASK_MODEL_RECOMMENDATION] -> use recommended model in Task tool
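The three-tier routing can be sketched as a function of task complexity. The complexity threshold and handler names are illustrative assumptions drawn from the table above, not the tool's actual routing logic.

```javascript
// Sketch of three-tier routing. Tier/handler names come from the table above;
// the complexity threshold is an illustrative assumption.
function routeTier(task) {
  if (task.isSimpleTransform) {
    return { tier: 1, handler: "agent-booster" };  // <1ms, $0: skip the LLM entirely
  }
  if (task.complexity <= 3) {
    return { tier: 2, handler: "haiku" };          // cheap, fast model for simple tasks
  }
  return { tier: 3, handler: "sonnet-or-opus" };   // complex reasoning, security
}
```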
| Code | Task | Agents |
|---|---|---|
| 1 | Bug Fix | coordinator, researcher, coder, tester |
| 3 | Feature | coordinator, architect, coder, tester, reviewer |
| 5 | Refactor | coordinator, architect, coder, reviewer |
| 7 | Performance | coordinator, perf-engineer, coder |
| 9 | Security | coordinator, security-architect, auditor |
| 11 | Docs | researcher, api-docs |
Swarm: 3+ files, new features, cross-module, API+tests, security, performance, DB schema.
Skip: single file, 1-2 line fix, docs, config, questions.
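The task-code table can be encoded as a roster lookup — every roster except docs includes a coordinator. The data comes from the table above; the constant name is illustrative.

```javascript
// Agent rosters per task code, from the table above.
const TASK_AGENTS = {
  1:  ["coordinator", "researcher", "coder", "tester"],            // bug fix
  3:  ["coordinator", "architect", "coder", "tester", "reviewer"], // feature
  5:  ["coordinator", "architect", "coder", "reviewer"],           // refactor
  7:  ["coordinator", "perf-engineer", "coder"],                   // performance
  9:  ["coordinator", "security-architect", "auditor"],            // security
  11: ["researcher", "api-docs"],                                  // docs
};
```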
| Setting | Value |
|---|---|
| Topology | hierarchical-mesh |
| Max Agents | 15 |
| Strategy | specialized |
| Consensus | raft |
| Memory | hybrid |
| HNSW | Enabled |
| Neural | Enabled |
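The defaults above could be expressed as a config object. This is a sketch with illustrative key names, not necessarily claude-flow's actual schema.

```javascript
// Default swarm configuration from the table above (illustrative key names).
const SWARM_DEFAULTS = {
  topology: "hierarchical-mesh",
  maxAgents: 15,
  strategy: "specialized",
  consensus: "raft",
  memory: "hybrid",
  hnsw: true,    // vector index enabled
  neural: true,  // neural training enabled
};
```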
Task tool: ALL execution (agents, files, code, git)
CLI (Bash): claude-flow swarm init, claude-flow hooks *, claude-flow agent spawn
MCP (MANDATORY): mcp__claude-flow__memory_{store,search,list,retrieve}
See multi-agent-docker/CLAUDE.md for full RuVector connection details, schema,
SQL examples, and extension capabilities. Summary:
- Host: `ruvector-postgres:5432` | DB: `ruvector` | Connection: `$RUVECTOR_PG_CONNINFO`
- ALWAYS use MCP tools (`mcp__claude-flow__memory_*`), never CLI
- Tables: `memory_entries` (1.17M+), `patterns`, `reasoning_patterns`, `session_state`
Topologies: hierarchical, mesh, hierarchical-mesh (recommended), adaptive
Strategies: byzantine (f<n/3), raft (f<n/2), gossip, crdt, quorum
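The fault bounds above (byzantine tolerates f < n/3 faulty nodes, raft tolerates f < n/2 crash faults) can be turned into a small helper — a sketch, with an illustrative function name.

```javascript
// Largest number of faulty agents each consensus strategy tolerates,
// from the bounds above: byzantine f < n/3, raft f < n/2.
function maxFaults(strategy, n) {
  if (strategy === "byzantine") return Math.ceil(n / 3) - 1; // largest f with f < n/3
  if (strategy === "raft")      return Math.ceil(n / 2) - 1; // largest f with f < n/2
  throw new Error(`no fault bound given for ${strategy}`);
}
```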
claude-flow session restore --latest # Start
claude-flow hooks session-end --generate-summary true --persist-state true # End