feat: Multi-Agent Council Orchestrator with Codegen Agent API #185
base: develop
Conversation
Implements a 3-stage council process using the Codegen Agent API:
- Stage 1: Generate N candidates from multiple models in parallel
- Stage 2 (optional): Peer ranking with anonymized evaluation
- Stage 3: Synthesis (simple or tournament-based for large councils)

Features:
- CLI command: codegen council run --prompt ... --models gpt-4o,claude-3-5-sonnet
- Full tracking of agent run IDs and web URLs for all stages
- Aggregate ranking calculation across all judges
- Tests included with mocked agent runs

Co-authored-by: Zeeeepa <[email protected]>
Important: Review skipped — bot user detected.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
2 issues found across 7 files
Prompt for AI agents (all 2 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/codegen/council/orchestrator.py">
<violation number="1" location="src/codegen/council/orchestrator.py:158">
P2: Label generation with `chr(65 + i)` only produces valid letters A-Z for 26 candidates. With more candidates, non-letter characters are generated that won't match the `[A-Z]` regex pattern in `_parse_ranking_from_text`. Consider using multi-character labels (e.g., AA, AB) for larger councils.</violation>
<violation number="2" location="src/codegen/council/orchestrator.py:336">
P1: Tasks returned from `_launch_parallel_runs` are in completion order (due to `as_completed`), not submission order. When zipped with `run_configs`, this causes incorrect model attribution. Store the config with each task or preserve submission order.</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
) -> Tuple[List[RankingResult], Dict[str, str]]:
    """Stage 2: Each agent ranks the anonymized candidates."""
    # Create anonymous labels (Response A, Response B, etc.)
    labels = [chr(65 + i) for i in range(len(candidates))]  # A, B, C, ...
P2: Label generation with chr(65 + i) only produces valid letters A-Z for 26 candidates. With more candidates, non-letter characters are generated that won't match the [A-Z] regex pattern in _parse_ranking_from_text. Consider using multi-character labels (e.g., AA, AB) for larger councils.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/codegen/council/orchestrator.py, line 158:
<comment>Label generation with `chr(65 + i)` only produces valid letters A-Z for 26 candidates. With more candidates, non-letter characters are generated that won't match the `[A-Z]` regex pattern in `_parse_ranking_from_text`. Consider using multi-character labels (e.g., AA, AB) for larger councils.</comment>
<file context>
@@ -0,0 +1,504 @@
+ ) -> Tuple[List[RankingResult], Dict[str, str]]:
+ """Stage 2: Each agent ranks the anonymized candidates."""
+ # Create anonymous labels (Response A, Response B, etc.)
+ labels = [chr(65 + i) for i in range(len(candidates))] # A, B, C, ...
+ label_to_model = {
+ f"Response {label}": cand.model
</file context>
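A possible fix, not part of this PR: generate spreadsheet-style labels so councils larger than 26 candidates still produce parseable labels. The helper below is hypothetical; the ranking parser's regex would also need to accept multi-letter labels (e.g. `[A-Z]+` instead of `[A-Z]`).

```python
def make_label(index: int) -> str:
    """Hypothetical helper: 0 -> 'A', 25 -> 'Z', 26 -> 'AA', 27 -> 'AB', ..."""
    label = ""
    n = index + 1  # 1-based, spreadsheet-column style
    while n > 0:
        n, rem = divmod(n - 1, 26)
        label = chr(65 + rem) + label
    return label

# Labels stay parseable no matter how large the council grows.
print([make_label(i) for i in (0, 25, 26, 27, 700)])  # ['A', 'Z', 'AA', 'AB', 'ZY']
```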
future = executor.submit(agent.run, prompt)
future_to_config[future] = (model, prompt)

for future in as_completed(future_to_config):
P1: Tasks returned from _launch_parallel_runs are in completion order (due to as_completed), not submission order. When zipped with run_configs, this causes incorrect model attribution. Store the config with each task or preserve submission order.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/codegen/council/orchestrator.py, line 336:
<comment>Tasks returned from `_launch_parallel_runs` are in completion order (due to `as_completed`), not submission order. When zipped with `run_configs`, this causes incorrect model attribution. Store the config with each task or preserve submission order.</comment>
<file context>
@@ -0,0 +1,504 @@
+ future = executor.submit(agent.run, prompt)
+ future_to_config[future] = (model, prompt)
+
+ for future in as_completed(future_to_config):
+ try:
+ task = future.result()
</file context>
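One way to address this, sketched below (not the PR's actual code): keep the `(model, prompt)` config attached to each future via the `future_to_config` mapping shown in the diff and return it alongside the completed task, so attribution never depends on completion order.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def launch_parallel_runs(agent, run_configs):
    """Sketch: return (model, prompt, task) tuples so model attribution
    never relies on zipping completion-ordered results with run_configs."""
    results = []
    with ThreadPoolExecutor(max_workers=len(run_configs)) as executor:
        future_to_config = {
            executor.submit(agent.run, prompt): (model, prompt)
            for model, prompt in run_configs
        }
        for future in as_completed(future_to_config):
            model, prompt = future_to_config[future]  # config travels with the future
            try:
                results.append((model, prompt, future.result()))
            except Exception as exc:
                results.append((model, prompt, exc))
    return results
```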
Co-authored-by: Zeeeepa <[email protected]>
🏛️ Multi-Agent Council Orchestrator
Implements a powerful multi-agent collaboration system using the Codegen Agent API, following the patterns from llm-council and OpenAI's Pro Mode.

What This PR Adds
Core Council System (3-stage process):
Stage 1: Parallel Candidate Generation
Stage 2: Peer Ranking (optional)
Stage 3: Synthesis
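Under the hood, the three stages boil down to roughly the following shape. This is a simplified, sequential sketch, not the orchestrator's actual code (the real orchestrator runs agents in parallel and tracks run IDs and web URLs); the `agents` mapping of plain callables is an assumption for illustration.

```python
from typing import Callable, Dict, Optional

def run_council(
    prompt: str,
    agents: Dict[str, Callable[[str], str]],  # model name -> "ask this agent" callable
    enable_ranking: bool = True,
) -> str:
    """Illustrative outline of the 3-stage council flow (sequential, simplified)."""
    # Stage 1: every model produces its own candidate answer.
    candidates: Dict[str, str] = {name: ask(prompt) for name, ask in agents.items()}

    # Stage 2 (optional): each agent judges the anonymized, labeled candidates.
    ranking_notes: Optional[Dict[str, str]] = None
    if enable_ranking:
        labeled = "\n\n".join(
            f"Response {chr(65 + i)}:\n{text}" for i, text in enumerate(candidates.values())
        )
        ranking_notes = {
            name: ask(f"Rank these anonymized responses best-to-worst:\n\n{labeled}")
            for name, ask in agents.items()
        }

    # Stage 3: a single synthesis call merges candidates (and rankings) into one answer.
    synthesis_prompt = (
        f"Original question:\n{prompt}\n\nCandidate answers:\n\n"
        + "\n\n".join(candidates.values())
    )
    if ranking_notes:
        synthesis_prompt += "\n\nJudge rankings:\n\n" + "\n\n".join(ranking_notes.values())
    synthesizer = next(iter(agents.values()))  # arbitrary choice for the sketch
    return synthesizer(synthesis_prompt)
```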
Usage
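A representative invocation, based on the CLI command in the commit message (the prompt text is a placeholder): `codegen council run --prompt "<your prompt>" --models gpt-4o,claude-3-5-sonnet`.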
Key Features
✅ Codegen Agent API Integration
Uses Agent.run() and AgentTask infrastructure
✅ Parallel Execution
✅ Rich CLI Output
✅ Tournament Synthesis
✅ Test Coverage
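Tournament synthesis (used for large councils, per the commit description) merges candidates in small brackets instead of stuffing every candidate into one synthesis prompt. A minimal sketch of the idea; the `synthesize` callable and `group_size` are assumptions, not the PR's implementation.

```python
from typing import Callable, List

def tournament_synthesize(
    candidates: List[str],
    synthesize: Callable[[List[str]], str],  # merges a small group into one answer
    group_size: int = 4,                     # assumed bracket size
) -> str:
    """Repeatedly synthesize small groups until a single answer remains."""
    round_inputs = candidates
    while len(round_inputs) > 1:
        round_inputs = [
            synthesize(round_inputs[i:i + group_size])
            for i in range(0, len(round_inputs), group_size)
        ]
    return round_inputs[0]
```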
Files Added
src/codegen/council/__init__.py - Module exports
src/codegen/council/models.py - Data models (AgentConfig, CouncilConfig, CouncilResult, etc.)
src/codegen/council/orchestrator.py - Core orchestration logic (503 lines)
src/codegen/cli/commands/council/main.py - CLI command implementation
tests/council/test_orchestrator.py - Unit tests

Files Modified
src/codegen/cli/cli.py - Added council_app to main CLI

Architecture Decisions
Codegen Agent API Only (not external providers)
Synchronous with Polling (not async/streaming)
Structured Prompt Engineering
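"Synchronous with Polling" means each agent run is launched and then polled until it reaches a terminal state, rather than streamed. A minimal sketch of such a loop; the `task.refresh()` / `task.status` names are assumptions about the AgentTask object, not confirmed SDK API.

```python
import time

def wait_for_task(task, poll_interval: float = 5.0, timeout: float = 600.0):
    """Poll an agent task until it reaches a terminal state or times out.

    `task.refresh()` and `task.status` are assumed attribute names used for
    illustration; the actual AgentTask interface may differ.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task.refresh()  # re-fetch the latest task state
        if task.status in ("completed", "failed", "cancelled"):
            return task
        time.sleep(poll_interval)
    raise TimeoutError("Agent run did not finish before the timeout")
```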
Future Enhancements (not in this PR)
Testing
Related
Based on patterns from: llm-council and OpenAI's Pro Mode.
Ready for review! This is Phase 1 of the multi-agent upgrade. Chain runner and recipes will follow in separate PRs.
Summary by cubic
Adds a multi-agent council orchestrator using the Codegen Agent API and a new codegen council CLI. It generates candidates in parallel, optionally ranks them, and synthesizes a final answer (simple or tournament) to improve results on complex prompts.
New Features
Migration
Written for commit fda8dac. Summary will update automatically on new commits.