Stop losing context. Start building faster.
An orchestration framework for AI coding assistants that solves context pollution and token exhaustion - enabling your AI to work on complex projects without running out of memory.
AI assistants suffer from context pollution - a well-documented challenge where model accuracy degrades as token count increases. This "context rot" stems from transformer architecture's quadratic attention mechanism, where each token must maintain pairwise relationships with all others.
The Impact: As your AI works on complex features, it accumulates conversation history, tool outputs, and code examples. By task 10-15, the context window fills with 200k+ tokens. The model loses focus, forgets earlier decisions, and eventually fails. You're forced to restart sessions and spend 30-60 minutes rebuilding context just to continue.
Industry Validation: Anthropic's research on context management confirms production AI agents "exhaust their effective context windows" on long-running tasks, requiring active intervention to prevent failure.
Traditional approaches treat context windows like unlimited memory. Task Orchestrator recognizes they're a finite resource that must be managed proactively.
Task Orchestrator implements industry-recommended patterns from Anthropic's context engineering research: persistent external memory, summary-based context passing, and sub-agent architectures with clean contexts.
How it works:
- Persistent memory (SQLite) stores project state outside context windows
- Summary-based passing - Tasks create 300-500 token summaries instead of passing 5-10k full contexts
- Sub-agent isolation - Specialists work with clean contexts, return condensed results
- Just-in-time loading - Fetch only what's needed for current work
Result: Scale to 50+ tasks without hitting context limits. Up to 90% token reduction (in line with Anthropic's reported 84% benchmark). Zero time wasted rebuilding context.
- ✅ Persistent Memory - AI remembers project state, completed work, and decisions across sessions
- ✅ Token Efficiency - Up to 90% reduction via summary-based context passing
- ✅ Hierarchical Tasks - Projects → Features → Tasks with dependency tracking
- ✅ Template System - 9 built-in workflow templates with decision frameworks and quality gates
- ✅ Event-Driven Workflows - Automatic status progression based on your config
- ✅ Sub-Agent Orchestration - Specialist routing for complex work (Claude Code)
- ✅ Skills & Hooks - Lightweight coordination and workflow automation (Claude Code)
- ✅ MCP Protocol Support - Core persistence and task management work with any MCP client
Deep dive: See the Agent Architecture Guide for a token efficiency comparison and Developer Architecture for technical details.
Easiest way - install everything (MCP server, skills, subagents, hooks) with a single plugin:
1. Clone this repository:
   `git clone https://github.com/jpicklyk/task-orchestrator.git`
   `cd task-orchestrator`
2. Add the local marketplace:
   `/plugin marketplace add ./`
3. Install the plugin:
   `/plugin install task-orchestrator@task-orchestrator-marketplace`
4. Restart Claude Code
5. Initialize your project:
   `setup_project`
Note: Once this repository is published on GitHub, you'll be able to use:
`/plugin marketplace add jpicklyk/task-orchestrator`
`/plugin install task-orchestrator`
See Plugin Installation Guide for detailed instructions and troubleshooting.
For other MCP clients or custom setup:
1. Install via Docker:
   `docker pull ghcr.io/jpicklyk/task-orchestrator:latest`
2. Configure your AI platform:

   Claude Code:
   `claude mcp add-json task-orchestrator '{"type":"stdio","command":"docker","args":["run","--rm","-i","-v","mcp-task-data:/app/data","-v",".:/project","-e","AGENT_CONFIG_DIR=/project","ghcr.io/jpicklyk/task-orchestrator:latest"]}'`

   This single command works across all platforms (macOS, Linux, Windows).
Other MCP clients: Task Orchestrator's core MCP protocol (persistent memory, task management) works with any MCP client, but advanced features (skills, subagents, hooks) are Claude Code-specific. See Installation Guide for configuration.
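For clients that use the common `mcpServers` JSON convention (Claude Desktop's `claude_desktop_config.json`, for example), a configuration pointing at the same Docker image might look roughly like the sketch below. Treat it as an illustration and adjust the volume paths for your environment; a relative `.:/project` mount only makes sense when the client is launched from your project directory.

```json
{
  "mcpServers": {
    "task-orchestrator": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "mcp-task-data:/app/data",
        "-v", "/absolute/path/to/your/project:/project",
        "-e", "AGENT_CONFIG_DIR=/project",
        "ghcr.io/jpicklyk/task-orchestrator:latest"
      ]
    }
  }
}
```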
First time setup - Initialize your AI with Task Orchestrator patterns:
"Run the initialize_task_orchestrator workflow"
This writes Task Orchestrator patterns to your AI's permanent memory (CLAUDE.md, .cursorrules, etc.)
Project setup - Initialize your project with configuration:
"Run setup_project to initialize Task Orchestrator"
Quick reference - View essential patterns anytime:
"Show me the getting_started guide"
That's it! Your AI can now create and manage tasks with persistent memory.
Complete setup: Quick Start Guide - Includes sub-agent setup, templates, and first feature walkthrough.
Your AI remembers project state, completed work, and technical decisions - even after restarting. No more re-explaining your codebase every morning.
Build features with 10+ tasks without hitting context limits. Traditional approaches fail at 12-15 tasks. Task Orchestrator scales to 50+ tasks effortlessly.
Database → Backend → Frontend → Testing workflows with automatic context passing. Each specialist sees only what they need, not everything.
Multiple AI agents work in parallel without conflicts. Built-in concurrency protection and dependency management.
Capture bugs and improvements as you find them. Organize work without losing track of what needs fixing.
1. Hierarchical Task Management
Project: E-Commerce Platform
└── Feature: User Authentication
    ├── Task: Database schema [COMPLETED]
    ├── Task: Login API [IN-PROGRESS]
    ├── Task: Password reset [PENDING]
    └── Task: API docs [PENDING] [BLOCKED BY: Login API]
2. Summary-Based Context Passing
Instead of passing 5,000 tokens of full task details, specialists create 300-500 token summaries:
### Completed
Created Users table with authentication fields (id, email, password_hash).
Added indexes for email lookup.
### Files Changed
- db/migration/V5__create_users.sql
- src/model/User.kt
### Next Steps
API endpoints can use this schema for authentication.

Result: Up to 92% token reduction per dependency. This implements Anthropic's "compaction" pattern - preserving critical information while discarding redundant details.
3. Event-Driven Workflows
Tasks progress automatically based on workflow events:
- `work_started` → Task moves to in-progress
- `implementation_complete` → Task moves to testing
- `tests_passed` → Task completes
- `all_tasks_complete` → Feature moves to testing
All status transitions are validated against your config in `.taskorchestrator/config.yaml`.
Learn more: Status Progression Guide and Workflow Prompts
Task Orchestrator follows a Plan → Orchestrate → Execute pattern that prevents context pollution:
Start with either:
- Plan file: Create a markdown/text file with your feature description, requirements, and context
- Conversation context: Describe your feature directly in conversation
Example:
# User Authentication Feature
Build complete authentication system with login, signup, and password reset.
Requirements:
- JWT-based authentication
- Password hashing with bcrypt
- Email verification
- Rate limiting on login attempts

Use the coordinate_feature_development workflow (Claude Code):
"Run coordinate_feature_development with my plan file"
What happens:
- Feature Architect (Opus) analyzes your plan → creates the feature with rich context
- Planning Specialist (Sonnet) breaks down the feature → creates dependency-aware tasks
- Returns structured feature ready for execution
Result: Feature with 5-15 tasks, proper templates, clear dependencies, appropriate specialist tags.
AI automatically:
- Routes tasks to specialists (Implementation Specialist (Haiku) by default, Senior Engineer (Sonnet) for complex issues)
- Respects dependency chains (database → API → frontend)
- Passes 300-500 token summaries between tasks (not 5k+ full contexts)
- Triggers status events as work progresses
Default Specialists:
- Implementation Specialist (Haiku) - General implementation tasks (fast, cost-efficient)
- Senior Engineer (Sonnet) - Complex debugging, architecture, unblocking
Custom Specialists (optional via `.taskorchestrator/agent-mapping.yaml`; see the sketch after this list):
- Backend Engineer, Frontend Developer, Database Engineer, Test Engineer, Technical Writer
- See Agent Architecture Guide for configuration
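The real `agent-mapping.yaml` schema is documented in the Agent Architecture Guide. Purely as an illustration (the key names below are hypothetical, not the actual schema), a mapping file pairs specialist roles with a model tier and the task tags routed to them:

```yaml
# .taskorchestrator/agent-mapping.yaml -- illustrative sketch; key names are hypothetical
specialists:
  backend-engineer:
    model: sonnet               # model tier that handles these tasks
    tags: [backend, api]        # task tags routed to this specialist
  database-engineer:
    model: haiku
    tags: [database, migration]
  test-engineer:
    model: haiku
    tags: [testing]
```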
Your role: Just say "What's next?" and the AI handles routing, dependencies, and coordination.
Pro Tip: The Task Orchestrator communication-style plugin is automatically active in Claude Code when installed via the plugin marketplace, giving clearer coordination through phase labels, status indicators, and concise progress updates.
Task Orchestrator uses event-driven status progression mapped to your workflow:
- Default statuses: PENDING → IN_PROGRESS → COMPLETED (customizable in `.taskorchestrator/config.yaml`)
- Event triggers: Work completion, test passing, and review approval automatically progress status
- Workflow types: Default, bug_fix, documentation flows with different status sequences
- Cascade effects: Task completion can trigger feature status changes
Configuration: `.taskorchestrator/config.yaml` defines (see the sketch after this list):
- Valid status transitions for each entity type (task, feature, project)
- Workflow flows (default, bug_fix, documentation)
- Event mappings (which events trigger which status changes)
- Prerequisites for status progression (e.g., "can't complete until all tasks done")
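The shipped config and its full schema are covered in the Status Progression Guide; the fragment below is only a hedged sketch of the kinds of rules the file expresses, reusing the statuses and events named above (the key names are hypothetical):

```yaml
# .taskorchestrator/config.yaml -- illustrative sketch; key names are hypothetical
task:
  statuses: [pending, in-progress, testing, completed]
  transitions:
    pending: [in-progress]
    in-progress: [testing]
    testing: [completed]
  events:
    work_started: in-progress          # event -> status it triggers
    implementation_complete: testing
    tests_passed: completed
feature:
  events:
    all_tasks_complete: testing
  prerequisites:
    completed: all_tasks_complete      # can't complete until every task is done
```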
Deep dive: Status Progression Guide for the complete configuration reference and workflow examples.
- Quick Start Guide - Complete setup walkthrough
- Installation Guide - Platform-specific configuration
- AI Guidelines - How AI uses Task Orchestrator autonomously
- Agent Architecture - 4-tier hybrid system: Direct Tools, Skills, Hooks, Subagents
- Skills Guide - Lightweight coordination (60-82% token savings)
- Hooks Guide - Workflow automation and event-driven integration
- Templates - 9 built-in workflow templates (instructions, frameworks, quality gates)
- Workflow Prompts - Automated workflow guidance
- API Reference - Complete MCP tools documentation
- Architecture - Technical deep-dive
- Troubleshooting - Solutions to common issues
- Contributing Guidelines - Development setup
- Database Migrations - Schema management
- Community Wiki - Examples, tips, and guides
| Feature | Claude Code | Other MCP Clients |
|---|---|---|
| Persistent Memory | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Template System | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Task Management | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Sub-Agent Orchestration | ✅ Tested & Supported | ❌ Claude Code-specific |
| Skills (Lightweight Coordination) | ✅ Tested & Supported | ❌ Claude Code-specific |
| Hooks (Workflow Automation) | ✅ Tested & Supported | ❌ Claude Code-specific |
| Status Event System | ✅ Tested & Supported | ✅ MCP Protocol Support |
Primary Platform: Claude Code is the primary tested and supported platform with full feature access including skills, subagents, and hooks.
Other MCP Clients: The core MCP protocol (persistent memory, task management, templates, status events) works with any MCP client, but we cannot verify functionality on untested platforms. Advanced orchestration features (skills, subagents, hooks) require Claude Code's .claude/ directory structure.
Claude Code (Full Orchestration):
You: "I have a plan for user authentication in plan.md"
AI: "Loading Feature Orchestration Skill..."
"Launching Feature Architect (Opus) with plan file..."
→ Feature created with 8 tasks
"Launching Planning Specialist (Sonnet)..."
→ Tasks broken down with dependencies
You: "What's next?"
AI: "Task 1: Database schema [PENDING]. No blockers."
Launches Implementation Specialist → Implements schema → Creates 400-token summary
You: "What's next?"
AI: "Task 2: Authentication API [PENDING]. Dependencies satisfied."
Reads 400-token summary (not 5k full context)
Launches Implementation Specialist → Implements API → Creates summary
You: "What's next?"
AI: "Task 3: Login UI [PENDING]. Backend ready."
Launches Implementation Specialist → Implements UI → Feature progresses
[Next morning - new session]
You: "What's next?"
AI: "Task 4: Integration tests [PENDING]. 3 tasks completed yesterday."
No context rebuilding - AI remembers everything from persistent memory
Key Benefits:
- Zero manual routing: `coordinate_feature_development` handles specialist selection
- Automatic dependency tracking: AI only suggests tasks with satisfied dependencies
- Persistent memory: New sessions start instantly with full context
- Token efficiency: 400-token summaries instead of 5k+ full contexts
Quick Fixes:
- AI can't find tools: Restart your AI client
- Docker not running: Start Docker Desktop, verify with `docker version`
- Connection problems: Enable `MCP_DEBUG=true` in your Docker config (see the sketch after this list)
- Skills/Sub-agents not available: Install via plugin marketplace (requires Claude Code)
- coordinate_feature_development not found: Install plugin via marketplace for full orchestration features
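As an example, assuming the Claude Code registration shown earlier, `MCP_DEBUG=true` can be passed to the container as an extra `-e` flag (you may need to remove and re-add the server if it is already registered):

```bash
claude mcp add-json task-orchestrator '{"type":"stdio","command":"docker","args":["run","--rm","-i","-e","MCP_DEBUG=true","-v","mcp-task-data:/app/data","-v",".:/project","-e","AGENT_CONFIG_DIR=/project","ghcr.io/jpicklyk/task-orchestrator:latest"]}'
```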
Get Help:
- Troubleshooting Guide - Comprehensive solutions
- Discussions - Ask questions and share ideas
- Issues - Bug reports and feature requests
Built with modern, reliable technologies:
- Kotlin 2.2.0 with Coroutines for concurrent operations
- SQLite + Exposed ORM for fast, zero-config database (persistent memory system)
- Flyway Migrations for versioned schema management
- MCP SDK 0.7.2 for standards-compliant protocol
- Docker for one-command deployment
Architecture Validation: Task Orchestrator implements patterns recommended in Anthropic's context engineering research: sub-agent architectures, compaction through summarization, just-in-time context loading, and persistent external memory. Our approach prevents context accumulation rather than managing it after the fact.
Architecture details: See Developer Guides
We welcome contributions! Task Orchestrator follows Clean Architecture with 4 distinct layers (Domain → Application → Infrastructure → Interface).
To contribute:
- Fork the repository
- Create a feature branch (
git checkout -b feature/amazing-feature) - Make your changes with tests
- Submit a pull request
See Contributing Guidelines for detailed development setup.
- Watch releases - Get notified of new versions
- View changelog - See what's changed
Version format: {major}.{minor}.{patch}.{git-commit-count}-{qualifier}
Current versioning defined in build.gradle.kts.
MIT License - Free for personal and commercial use
AI coding tools, AI pair programming, Model Context Protocol, MCP server, Claude Code, Claude Desktop, AI task management, context persistence, AI memory, token optimization, RAG, AI workflow automation, persistent AI assistant, context pollution solution, AI orchestration, sub-agent coordination
Ready to build complex features without context limits?
`docker pull ghcr.io/jpicklyk/task-orchestrator:latest`

Then follow the Quick Start Guide to configure your AI platform.