
Composable Sub-Agents #340

@niksacdev

Description


Describe the feature or problem you'd like to solve

Enable user-defined sub-agents in Copilot CLI for specialized task delegation (e.g. architecture review, code quality gates, design validation)

Proposed solution

Problem Statement
When using Copilot CLI for complex workflows, users need different "roles" for different kinds of decisions. Examples:

  • A code-review agent that enforces Python best practices and catches anti-patterns
  • An architect agent that validates system design decisions against SOLID principles and scalability
  • A UI/UX agent that reviews frontend code for accessibility, performance, and user patterns

Currently, a single Copilot instance tries to handle all these concerns, leading to:

  • Context dilution: Generic advice instead of specialist expertise
  • Workflow fragmentation: You context-switch between Copilot CLI and other review systems

Desired Behavior
Users can define custom agents under .github/agents/, then:

  • Invoke on-demand: copilot call architect --review-design src/architecture.md
  • Hook into workflows: Automatically trigger agents on commit, PR creation, or file changes
  • Chain agents: copilot call architect | code-reviewer | performance-auditor (see the sketch after this list)
  • Share agent definitions: Teams version-control agent personas and rules; new devs inherit them
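
A minimal sketch of what chaining might look like in practice, assuming a hypothetical --format json flag and --stdin context passing (neither is a shipped feature; the output shape shown is illustrative only):

# Each agent emits structured findings that the next agent receives as added context
copilot call architect --review src/architecture.md --format json \
  | copilot call code-reviewer --stdin --format json \
  | copilot call performance-auditor --stdin --format json > review.json

# Illustrative output shape, not a real schema:
# {"agent": "performance-auditor", "findings": [{"severity": "must-fix", "file": "src/api.py", "note": "N+1 query in handler"}]}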

Success Criteria

  • Users can define custom agents in .github/agents/ with YAML / MD schemas
  • CLI supports copilot call with context passing
  • Agents auto-trigger on file changes, commits, and PRs (with opt-out)
  • Agent output is parseable and chainable
  • Teams can share agent definitions through a GitHub repo; agents are discoverable/versioned
  • Integration with GitHub Actions (agents as reusable steps; see the sketch after this list)
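
A minimal sketch of the GitHub Actions integration, assuming the proposed copilot call subcommand (CLI installation is omitted because no such setup action exists yet); actions/checkout is real, everything else is illustrative:

# .github/workflows/agent-review.yml (illustrative only)
name: Agent review
on: [pull_request]

jobs:
  architect-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical step: assumes the proposed CLI with sub-agent support is installed
      - run: copilot call architect --review src/architecture.md
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}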

Reference Implementation

Claude Code already does this (a rough sketch of its format follows this list):

  • Agents defined in context/system prompts
  • Invoked on-demand or context-aware
  • Results chain together seamlessly
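
For comparison, Claude Code stores sub-agents as Markdown files with YAML frontmatter under .claude/agents/ (e.g. .claude/agents/code-reviewer.md); roughly:

---
name: code-reviewer
description: Reviews changed code for bugs, security issues, and style problems
tools: Read, Grep, Glob
---
You are a senior code reviewer. Inspect the changed files, flag bugs and
security issues first, and report findings grouped by severity.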

Example prompts or workflows

# .github/agents/architect.yaml
name: Architect
description: "Validates system design, scalability, and dependency patterns"
role: |
  You are a senior systems architect. Your job is to:
  - Review designs for scalability bottlenecks and single points of failure
  - Validate architecture against SOLID principles
  - Suggest trade-offs (monolith vs. distributed, sync vs. async)
  - Flag tech debt risks early
  
expertise:
  - distributed systems
  - microservices patterns
  - scalability modeling
  - dependency graphs
  
constraints:
  - Always ask about expected scale before recommending
  - Flag assumptions; don't assume production use-cases
  - Recommend 2-3 alternatives with trade-offs, not prescriptive answers

triggers:
  manual: true
  auto: 
    - on: file_change
      paths: ["src/*/architecture.md", "docs/design-*.md"]
      action: "Review and flag risks"

# .github/agents/code-reviewer.yaml
name: Code Reviewer
description: "Enforces language best practices and code quality standards"
role: |
  You are a code reviewer focused on Python best practices.
  - Check for PEP 8 compliance, type hints, docstrings
  - Flag potential bugs (mutable defaults, off-by-one, unhandled exceptions)
  - Suggest performance improvements (list comprehensions, caching, algorithmic fixes)
  - Review for security (SQL injection, secrets in logs, unsafe deserialization)
  
expertise:
  - python best practices (PEP 8, PEP 484)
  - performance patterns
  - security vulnerabilities
  - testing strategies
  
constraints:
  - Be specific; link to the line and explain the fix
  - Distinguish between "must fix" (security, bugs) and "nice to have" (style)
  - Suggest a PR-ready code snippet for common issues

triggers:
  manual: true
  auto:
    - on: commit
      action: "Review changed Python files for best practices"
    - on: pull_request
      action: "Full code quality audit"

# .github/agents/ui-ux-reviewer.yaml
name: UI/UX Reviewer
description: "Validates frontend code for accessibility, UX patterns, and performance"
role: |
  You are a frontend architect reviewing code for user experience and accessibility.
  - Check WCAG 2.1 AA compliance (color contrast, ARIA labels, keyboard nav)
  - Validate UX patterns (consistent spacing, loading states, error handling)
  - Flag performance issues (unused CSS, render blocking, image optimization)
  - Review component design for reusability and consistency
  
expertise:
  - web accessibility (WCAG, screen readers)
  - responsive design
  - component library patterns
  - web performance
  
constraints:
  - Focus on user impact, not bikeshedding
  - Recommend tooling for automated checks (axe, Lighthouse)
  - Suggest design tokens or component updates for consistency

triggers:
  manual: true
  auto:
    - on: file_change
      paths: ["src/**/*.jsx", "src/**/*.tsx", "src/styles/**"]
      action: "Review for accessibility and UX patterns"

Usage Examples:

# Ask architect to review a design doc
copilot call architect --review src/architecture.md

# Have code-reviewer audit your recent changes
copilot call code-reviewer --scope recent_commits

# Chain agents: design → code → performance
copilot review-pr --agents architect,code-reviewer,performance-auditor

# Define inline constraints
copilot call architect --review design.md --constraint "Must handle 10k concurrent users"

Auto Triggers

git commit -m "Add caching layer"
# → Copilot automatically runs code-reviewer on changed files
# → Output appended to commit message or shown in CLI

# On PR, run full agent suite
git push origin feature/new-api
# → Triggers architect + code-reviewer + ui-ux-reviewer automatically
# → Results posted as PR comment or summary
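
Until native triggers exist, the commit-time behavior could be approximated with an ordinary git hook; copilot call is the proposed syntax and the --files flag is an assumption, not a shipped option:

#!/bin/sh
# .git/hooks/post-commit (illustrative stopgap)
# List Python files touched by the commit that was just created
changed=$(git diff-tree --no-commit-id --name-only -r HEAD -- '*.py')
if [ -n "$changed" ]; then
  # Proposed command; would fail today because 'copilot call' does not exist yet
  copilot call code-reviewer --files "$changed" || true
fi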

Agent Composition

# Define a "team review" workflow
copilot define-workflow team-review \
  --agents architect,code-reviewer,security-auditor \
  --on pull_request \
  --output summary

# Then on PR: copilot run-workflow team-review
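
define-workflow could plausibly persist a small definition file like the one below; the path and keys are assumptions to make the idea concrete, not a proposed spec:

# .github/agents/workflows/team-review.yaml (hypothetical)
name: team-review
on: pull_request
agents:
  - architect
  - code-reviewer
  - security-auditor
output: summary   # aggregate all agent findings into a single PR comment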

Additional context

Why This Matters
For Individual Developers:

  • Specialized feedback without leaving the CLI
  • Catch issues before they hit PR review (faster iteration)
  • Learn best practices through agent feedback loops

For Teams:

  • Shared agent definitions version-controlled in .github/agents/
  • Onboarding: New devs inherit team review standards automatically
  • Scalable code review: Automation gates + focused human reviews

For the Broader Ecosystem:

  • Opens a community agent marketplace (architecture specialists, language-specific reviewers, compliance auditors)
  • Agents become portable across projects (like GitHub Actions)
  • Foundation for agentic workflows in enterprise CI/CD
