feat: add elegance-pipeline plugin (Codex bundle -> Claude Code native)#156
feat: add elegance-pipeline plugin (Codex bundle -> Claude Code native)#156
Conversation
…e) (#156) - 4 scouts (sonnet), 2 judges (opus), 1 planner, 1 verifier, 1 gated implementer - Persistent state manager with stage gates and implementation signal - 3 commands (init, status, run), 1 skill, 5 agent definitions - All .codex/ paths rewritten to Claude plugin structure - Codex-Spark->sonnet, GPT-5.4->opus, --solution-file->--project-anchor - State moved to project-local .claude/elegance_pipeline/state/ - Fix marketplace.json version mismatches: hookify 0.2.0->0.2.1, metacognitive-guard 0.4.5->0.5.0
|
Caution Review failedThe pull request is closed. ℹ️ Recent review info⚙️ Run configurationConfiguration used: Path: .coderabbit.yaml Review profile: CHILL Plan: Pro Run ID: 📒 Files selected for processing (20)
Cache: Disabled due to data retention organization setting Knowledge base: Disabled due to data retention organization setting 📝 WalkthroughWalkthroughIntroduces the "elegance-pipeline" plugin, a multi-agent code-analysis system with scouts, judges, planner, verifier, and implementer agents coordinated via a Python state manager. Updates marketplace metadata and versioning for related plugins. Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant Pipeline as Pipeline Manager
participant Scouts
participant Judges
participant Planner
participant Verifier
participant Implementer
participant StateStore as State Store
User->>Pipeline: init (project_anchor, scopes)
Pipeline->>StateStore: create initial state
User->>Pipeline: run
Pipeline->>StateStore: load state
Pipeline->>Scouts: prompt (4 parallel)
Scouts-->>StateStore: submit results
Pipeline->>Judges: prompt (2 parallel)
Judges-->>StateStore: submit results
Pipeline->>Planner: prompt (sequential)
Planner-->>StateStore: submit tasks
Pipeline->>Verifier: prompt (sequential)
Verifier-->>StateStore: submit approval + signal
alt Verifier: READY
Pipeline->>Implementer: prompt (gated)
Implementer-->>StateStore: submit refactors
else Verifier: NOT READY
Pipeline->>User: stop (no implementation)
end
Pipeline->>User: workflow complete
Estimated code review effort🎯 3 (Moderate) | ⏱️ ~25 minutes ✨ Finishing Touches
🧪 Generate unit tests (beta)
Tip Try Coding Plans. Let us write the prompt for your AI agent so you can ship faster (with fewer bugs). Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Summary of ChangesHello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the Claude Code plugin ecosystem by integrating a sophisticated multi-agent code elegance pipeline. This new plugin automates the process of identifying and addressing code quality issues, transforming a previously external workflow into a native, integrated experience. The changes also include crucial updates to plugin metadata, ensuring the marketplace accurately reflects the available tools and their versions. Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request introduces a new "elegance-pipeline" plugin, a significant feature involving a multi-agent workflow. However, several security vulnerabilities were identified, including a critical command injection risk in the orchestration instructions and potential prompt injection and security logic bypasses in the state manager, stemming from insecure handling of LLM-generated content. Additionally, critical logic bugs were found in the Python state manager ("pipeline.py") related to template rendering safety, state transition logic, and readiness checks, along with a minor formatting issue in "CHANGELOG.md".
| echo "<agent output>" | python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | ||
| --role <role> --slot <slot> --stdin | ||
| ``` |
There was a problem hiding this comment.
The procedure for running the pipeline instructs the orchestrator (Claude Code) to pipe agent output into the state manager using echo "<agent output>" | python .... This is a critical command injection vulnerability. If the agent output contains shell metacharacters (e.g., backticks, $(...), or semicolons), they will be executed by the shell when the command is constructed. Since agents can be manipulated or produce unexpected output, this allows for arbitrary command execution on the user's machine.
| echo "<agent output>" | python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | |
| --role <role> --slot <slot> --stdin | |
| ``` | |
| 4. After each agent completes, submit its result to the state manager using the --file argument to avoid shell injection: | |
| ```bash | |
| python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | |
| --role <role> --slot <slot> --file <path_to_agent_output_file> |
| if approved: | ||
| state.implementation_signal = True | ||
| state.verifier_signal_source = slot | ||
| else: | ||
| state.implementation_signal = False | ||
| state.verifier_signal_source = slot |
There was a problem hiding this comment.
This logic for setting the implementation_signal has a subtle bug. If approved is None (i.e., the verifier's output did not contain a clear 'yes' or 'no'), the else branch is executed, which incorrectly sets state.implementation_signal to False. The signal should only be updated when a definitive decision is found. The current implementation could prematurely block the pipeline on ambiguous output. Additionally, the assignment to verifier_signal_source is duplicated.
| if approved: | |
| state.implementation_signal = True | |
| state.verifier_signal_source = slot | |
| else: | |
| state.implementation_signal = False | |
| state.verifier_signal_source = slot | |
| if approved is not None: | |
| state.implementation_signal = approved | |
| state.verifier_signal_source = slot |
| if record.slot not in ready and record.status != "submitted": | ||
| reasons = { | ||
| "judge": "All 4 scouts must be submitted first.", | ||
| "planner": "Both judges must be submitted first.", | ||
| "verifier": "Planner must be submitted first.", | ||
| "implementer": "Verifier must approve implementation first or set signal manually.", | ||
| } | ||
| raise SystemExit(reasons.get(record.role, "This slot is not ready yet.")) |
There was a problem hiding this comment.
There's a logic flaw in _assert_ready. It incorrectly allows generating a new prompt for a pipeline stage that has already been completed. The condition and record.status != "submitted" bypasses the readiness check for submitted slots, which could lead to re-running parts of the pipeline unnecessarily. The check should fail for any slot that is not in the ready set, with a more specific error message for already-submitted slots.
| if record.slot not in ready and record.status != "submitted": | |
| reasons = { | |
| "judge": "All 4 scouts must be submitted first.", | |
| "planner": "Both judges must be submitted first.", | |
| "verifier": "Planner must be submitted first.", | |
| "implementer": "Verifier must approve implementation first or set signal manually.", | |
| } | |
| raise SystemExit(reasons.get(record.role, "This slot is not ready yet.")) | |
| if record.slot not in ready: | |
| if record.status == "submitted": | |
| raise SystemExit(f"Slot {record.slot} has already been submitted.") | |
| reasons = { | |
| "judge": "All 4 scouts must be submitted first.", | |
| "planner": "Both judges must be submitted first.", | |
| "verifier": "Planner must be submitted first.", | |
| "implementer": "Verifier must approve implementation first or set signal manually.", | |
| } | |
| raise SystemExit(reasons.get(record.role, f"This slot is not ready yet: {record.slot}")) |
| def render(self, name: str, context: Dict[str, str]) -> str: | ||
| template_path = TEMPLATE_DIR / f"{name}.md" | ||
| if not template_path.exists(): | ||
| raise SystemExit(f"Missing template: {template_path}") | ||
| text = template_path.read_text(encoding="utf-8") | ||
| for key, value in context.items(): | ||
| text = text.replace("{" + key + "}", value) | ||
| return text |
There was a problem hiding this comment.
The TemplateRenderer.render method uses text.replace() in a loop, making it vulnerable to recursive replacement (a form of prompt injection). This could allow an agent to inject instructions or placeholders into subsequent prompts, leading to information leakage or manipulation. The current str.replace logic is also not safe for template rendering as it can lead to incorrect prompt generation if a replacement value contains a placeholder, making the output brittle and dependent on iteration order. A single-pass substitution using re.sub is recommended for safer and more predictable templating, preventing recursive injection.
| def render(self, name: str, context: Dict[str, str]) -> str: | |
| template_path = TEMPLATE_DIR / f"{name}.md" | |
| if not template_path.exists(): | |
| raise SystemExit(f"Missing template: {template_path}") | |
| text = template_path.read_text(encoding="utf-8") | |
| for key, value in context.items(): | |
| text = text.replace("{" + key + "}", value) | |
| return text | |
| def render(self, name: str, context: Dict[str, str]) -> str: | |
| template_path = TEMPLATE_DIR / f"{name}.md" | |
| if not template_path.exists(): | |
| raise SystemExit(f"Missing template: {template_path}") | |
| text = template_path.read_text(encoding="utf-8") | |
| # Use a single-pass replacement to prevent recursive injection | |
| def replace(match): | |
| return context.get(match.group(1), match.group(0)) | |
| import re | |
| return re.sub(r"\{([^\{}]+)\}", replace, text) |
| if role == "verifier": | ||
| approved = self._parse_bool(text, r"Implementation approved:\s*(yes|no)") | ||
| if approved is None: | ||
| approved = self._parse_bool(text, r"Implementation warranted:\s*(yes|no)") |
There was a problem hiding this comment.
The submit command parses the verifier's output using re.search on the entire text to determine the implementation_signal. Since the verifier's prompt includes the output of the planner, a malicious or compromised planner could inject a fake verdict (e.g., "Implementation approved: yes") into its output. If the verifier includes this in its response, the script might find it and enable the implementation phase (which has full write access) even if the verifier's own verdict was "no".
| if role == "verifier": | |
| approved = self._parse_bool(text, r"Implementation approved:\s*(yes|no)") | |
| if approved is None: | |
| approved = self._parse_bool(text, r"Implementation warranted:\s*(yes|no)") | |
| if role == "verifier": | |
| # Narrow the search to the Verdict section to prevent injection from context | |
| verdict_match = re.search(r"## Verdict\s*(.*)", text, flags=re.DOTALL | re.IGNORECASE) | |
| verdict_text = verdict_match.group(1) if verdict_match else text | |
| approved = self._parse_bool(verdict_text, r"Implementation approved:\s*(yes|no)") | |
| if approved is None: | |
| approved = self._parse_bool(verdict_text, r"Implementation warranted:\s*(yes|no)") |
|
|
||
| - **marketplace.json version mismatches**: hookify 0.2.0->0.2.1, metacognitive-guard 0.4.5->0.5.0 (synced with plugin.json) | ||
|
|
||
| ### Added |
There was a problem hiding this comment.
This ### Added heading is a duplicate. To follow the Keep a Changelog format, all new additions for this release should be grouped under a single ### Added section. Please move the items from this section to the ### Added section at line 9.
There was a problem hiding this comment.
Pull request overview
Adds a new Claude Code-native plugin (elegance-pipeline) that ports the prior Codex “elegance pipeline bundle” into this repo’s plugin marketplace, including a Python state manager and agent/command/skill assets, and updates the marketplace + changelog accordingly.
Changes:
- Added
plugins/elegance-pipeline/with a persistent pipeline state manager (pipeline.py), prompt templates, 5 agent definitions, 3 commands, and 1 skill. - Updated
.claude-plugin/marketplace.jsonto add the new plugin and to sync hookify/metacognitive-guard versions. - Updated
CHANGELOG.mdand.gitignoreto document/ignore the new plugin’s runtime state.
Reviewed changes
Copilot reviewed 19 out of 20 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| plugins/elegance-pipeline/skills/elegance-pipeline/SKILL.md | Adds a skill definition describing the pipeline workflow and how to orchestrate it. |
| plugins/elegance-pipeline/elegance_pipeline/templates/scout.md | Adds scout role prompt template. |
| plugins/elegance-pipeline/elegance_pipeline/templates/judge.md | Adds judge role prompt template. |
| plugins/elegance-pipeline/elegance_pipeline/templates/planner.md | Adds planner role prompt template. |
| plugins/elegance-pipeline/elegance_pipeline/templates/verifier.md | Adds verifier role prompt template (controls gate). |
| plugins/elegance-pipeline/elegance_pipeline/templates/implementer.md | Adds implementer role prompt template (execution protocol). |
| plugins/elegance-pipeline/elegance_pipeline/pipeline.py | Introduces the Python workflow state manager (init/status/prompt/submit/signal). |
| plugins/elegance-pipeline/commands/init.md | Adds a slash command to initialize pipeline state. |
| plugins/elegance-pipeline/commands/status.md | Adds a slash command to show pipeline status. |
| plugins/elegance-pipeline/commands/run.md | Adds a slash command intended to run the next stage via orchestration steps. |
| plugins/elegance-pipeline/agents/elegance-scout.md | Adds scout agent definition. |
| plugins/elegance-pipeline/agents/elegance-judge.md | Adds judge agent definition. |
| plugins/elegance-pipeline/agents/elegance-planner.md | Adds planner agent definition. |
| plugins/elegance-pipeline/agents/elegance-verifier.md | Adds verifier agent definition. |
| plugins/elegance-pipeline/agents/elegance-implementer.md | Adds implementer agent definition. |
| plugins/elegance-pipeline/README.md | Documents purpose, setup, usage, and state location for the plugin. |
| plugins/elegance-pipeline/.claude-plugin/plugin.json | Adds plugin manifest registering commands and agents. |
| .claude-plugin/marketplace.json | Adds the new plugin and bumps versions for hookify/metacognitive-guard. |
| CHANGELOG.md | Documents the new plugin and marketplace version sync changes. |
| .gitignore | Ignores project-local elegance pipeline state directory and Codex backup artifacts. |
| if role == "verifier": | ||
| approved = self._parse_bool(text, r"Implementation approved:\s*(yes|no)") | ||
| if approved is None: | ||
| approved = self._parse_bool(text, r"Implementation warranted:\s*(yes|no)") | ||
| if approved: | ||
| state.implementation_signal = True | ||
| state.verifier_signal_source = slot | ||
| else: | ||
| state.implementation_signal = False | ||
| state.verifier_signal_source = slot |
There was a problem hiding this comment.
Verifier decision parsing: if neither "Implementation approved:" nor "Implementation warranted:" is found, approved stays None and the code falls into the else branch, silently forcing implementation_signal = False. This should fail loudly (exit with an error) so a malformed verifier output can’t accidentally block (or mis-set) the implementation gate.
|
|
||
| SCRIPT_DIR = Path(__file__).resolve().parent | ||
| TEMPLATE_DIR = SCRIPT_DIR / "templates" | ||
| STATE_DIR = Path.cwd() / ".claude" / "elegance_pipeline" / "state" | ||
| CONFIG_PATH = STATE_DIR / "config.json" | ||
| STATE_PATH = STATE_DIR / "workflow_state.json" | ||
| PIPELINE_CMD = f"python {SCRIPT_DIR / 'pipeline.py'}" |
There was a problem hiding this comment.
PIPELINE_CMD hardcodes python and doesn’t quote the script path. This can break in environments where python isn’t on PATH, where Python 2 is default, or when the plugin path contains spaces. Prefer building the command from sys.executable and shell-quoting the path (or emit a list/JSON form if supported).
| SCRIPT_DIR = Path(__file__).resolve().parent | |
| TEMPLATE_DIR = SCRIPT_DIR / "templates" | |
| STATE_DIR = Path.cwd() / ".claude" / "elegance_pipeline" / "state" | |
| CONFIG_PATH = STATE_DIR / "config.json" | |
| STATE_PATH = STATE_DIR / "workflow_state.json" | |
| PIPELINE_CMD = f"python {SCRIPT_DIR / 'pipeline.py'}" | |
| import shlex | |
| SCRIPT_DIR = Path(__file__).resolve().parent | |
| TEMPLATE_DIR = SCRIPT_DIR / "templates" | |
| STATE_DIR = Path.cwd() / ".claude" / "elegance_pipeline" / "state" | |
| CONFIG_PATH = STATE_DIR / "config.json" | |
| STATE_PATH = STATE_DIR / "workflow_state.json" | |
| PIPELINE_CMD = f"{shlex.quote(sys.executable)} {shlex.quote(str(SCRIPT_DIR / 'pipeline.py'))}" |
| --- | ||
| description: Run the next ready stage of the elegance pipeline | ||
| disable-model-invocation: true | ||
| allowed-tools: Read, Grep, Glob, Bash, Agent | ||
| --- |
There was a problem hiding this comment.
allowed-tools includes Agent, but this repo’s orchestration commands consistently use the Task tool to spawn subagents (and no other command declares an Agent tool). If this is meant to allow subagent spawning, switch to Task and update the command text accordingly.
| @@ -0,0 +1,14 @@ | |||
| --- | |||
| description: Show elegance pipeline workflow status | |||
| disable-model-invocation: true | |||
There was a problem hiding this comment.
disable-model-invocation: true is not used by other commands in this repo and may prevent Claude from following the instructions (this command asks to run Bash and then report the output to the user). Unless this is known-good for Claude Code commands, consider removing it.
| disable-model-invocation: true |
| @@ -0,0 +1,22 @@ | |||
| --- | |||
| description: Initialize the elegance pipeline for this repository | |||
| disable-model-invocation: true | |||
There was a problem hiding this comment.
disable-model-invocation: true is not used by other commands in this repo and may prevent the interactive behavior described below (asking the user for missing args). Unless this key is confirmed supported and intended here, consider removing it.
| disable-model-invocation: true |
|
|
||
| SCRIPT_DIR = Path(__file__).resolve().parent | ||
| TEMPLATE_DIR = SCRIPT_DIR / "templates" | ||
| STATE_DIR = Path.cwd() / ".claude" / "elegance_pipeline" / "state" |
There was a problem hiding this comment.
--project-root is stored in config and printed, but STATE_DIR is always computed from Path.cwd(). That means running the command from a different working directory will write state to the wrong place, and project_root has no functional effect. Consider deriving the state directory from cfg.project_root (or remove --project-root if it’s intentionally unused).
| STATE_DIR = Path.cwd() / ".claude" / "elegance_pipeline" / "state" | |
| _ENV_PROJECT_ROOT = os.environ.get("ELEGANCE_PIPELINE_PROJECT_ROOT") | |
| _BASE_DIR = Path(_ENV_PROJECT_ROOT) if _ENV_PROJECT_ROOT else Path.cwd() | |
| STATE_DIR = _BASE_DIR / ".claude" / "elegance_pipeline" / "state" |
| ], | ||
| "commands": "./commands", | ||
| "agents": [ | ||
| "./agents/elegance-scout.md", | ||
| "./agents/elegance-judge.md", | ||
| "./agents/elegance-planner.md", | ||
| "./agents/elegance-verifier.md", | ||
| "./agents/elegance-implementer.md" | ||
| ] |
There was a problem hiding this comment.
The plugin includes a skills/ directory, but plugin.json doesn’t declare it (other skill-bearing plugins use "skills": "./skills"). Without this, the elegance-pipeline skill may not be discoverable/loaded by Claude Code.
| @@ -0,0 +1,54 @@ | |||
| --- | |||
| description: Run the next ready stage of the elegance pipeline | |||
| disable-model-invocation: true | |||
There was a problem hiding this comment.
disable-model-invocation: true is not used by other commands in this repo and may prevent the command from actually being executed as an LLM-driven workflow (this command requires tool use + summarization). Unless this flag is confirmed supported/required, consider removing it to match the established command format here.
| disable-model-invocation: true |
| Pass `$ARGUMENTS` to control which stage to run: | ||
| - `scouts` — run only scout phase | ||
| - `judges` — run only judge phase | ||
| - `all` — run all phases sequentially (default) | ||
| - A specific slot like `scout-1` or `judge-2` | ||
|
|
There was a problem hiding this comment.
The ## Arguments section claims /elegance-pipeline:run can run phases sequentially based on $ARGUMENTS (e.g., all, scouts, a specific slot), but the command body only documents a manual procedure and the state manager doesn’t implement any “run stage” subcommand. Either implement argument-driven behavior (likely by adding a run command in pipeline.py) or adjust this section to avoid promising functionality that doesn’t exist.
| Pass `$ARGUMENTS` to control which stage to run: | |
| - `scouts` — run only scout phase | |
| - `judges` — run only judge phase | |
| - `all` — run all phases sequentially (default) | |
| - A specific slot like `scout-1` or `judge-2` | |
| This command currently documents a **manual procedure** for running the pipeline. | |
| Any `$ARGUMENTS` passed to `/elegance-pipeline:run` **do not change** the behavior described above; | |
| you must explicitly follow the status → prompt → subagent → submit loop. | |
| You MAY still use `$ARGUMENTS` as a human-facing note (for example, to remind yourself you are focusing | |
| on “scouts”, “judges”, or a specific slot), but pipeline execution is not automated by these values. |
| echo "<agent output>" | python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | ||
| --role <role> --slot <slot> --stdin |
There was a problem hiding this comment.
The example pipeline submission command echo "<agent output>" | python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit ... embeds untrusted agent output directly into a Bash command in double quotes, which still allows command substitution via $(...) or backticks and can lead to arbitrary command execution if the output contains shell metacharacters. An attacker who can influence agent output (e.g., via repository content that the agent echoes) could inject $(malicious_command) and have it executed when this pattern is followed. Adjust the orchestration pattern so agent output is passed to pipeline.py via stdin or a temporary file without going through shell interpolation (for example, by avoiding echo with inline output and using safer redirection or tooling APIs that pass data directly).
| echo "<agent output>" | python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | |
| --role <role> --slot <slot> --stdin | |
| # Recommended: save agent output to a file and pass it via stdin | |
| python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | |
| --role <role> --slot <slot> --stdin < agent_output.json | |
| # Or, for manual pasting, use a quoted heredoc to avoid shell interpolation: | |
| python ${CLAUDE_PLUGIN_ROOT}/elegance_pipeline/pipeline.py submit \ | |
| --role <role> --slot <slot> --stdin << 'EOF' | |
| <paste agent output here, then press Enter and Ctrl-D> | |
| EOF |
Summary
Converts the Codex
elegance-pipeline-bundle.zipinto a native Claude Code plugin.Pipeline
What changed
plugins/elegance-pipeline/: plugin manifest, 5 agents, 3 commands, 1 skill, pipeline.py state manager, 5 templates.codex/paths rewritten to${CLAUDE_PLUGIN_ROOT}--solution-file->--project-anchor(framework-agnostic).claude/elegance_pipeline/state/Also fixed
Validation
pipeline.py statusandpromptcommands verified workingSummary by CodeRabbit
New Features
Documentation