feat(skills): add agent-sort skill for evidence-based ECC installation #981
Kavya071 wants to merge 1 commit into affaan-m:main
Conversation
Launches 6 parallel agents that read every ECC item and search the actual codebase for matching languages, frameworks, imports, and file types. Sorts into DAILY (loaded every session) vs LIBRARY (zero tokens until triggered).

- Agent 1: Agents (28 files) — matches by language/framework
- Agent 2: Skills (125 folders) — matches by dependencies/patterns
- Agent 3: Commands (60 files) — general vs language-specific
- Agent 4: Rules (65 files) — matches by file extensions
- Agent 5: Hooks (24 scripts) — checks for Prettier/TS/ESLint/OS compat
- Agent 6: Extras (contexts, guides) — always daily or reference

Includes automatic router creation, stale rule cleanup, and verification. Tested end-to-end on a React Native + TypeScript + Supabase production app.

Resolves affaan-m#916
📝 Walkthrough
Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant User
    participant Controller as Agent-Sort Controller
    participant Agents as Parallel Agents<br/>(6x)
    participant Compiler as Result Compiler
    participant FileSystem as File System
    User->>Controller: Initiate Agent-Sort (verify ECC checkout)
    Controller->>Agents: Dispatch 6 agents in parallel
    par Parallel Execution
        Agents->>Agents: AGENTS: Read agents & search repo
        Agents->>Agents: SKILLS: Read skills & search repo
        Agents->>Agents: COMMANDS: Read commands & search repo
        Agents->>Agents: RULES: Read rules & search repo
        Agents->>Agents: HOOKS: Read hooks & search repo
        Agents->>Agents: EXTRAS: Read extras & search repo
    end
    Agents->>Compiler: Return categorized outputs<br/>(DAILY vs LIBRARY, INSTALL vs SKIP)
    Compiler->>FileSystem: Create required directories
    Compiler->>FileSystem: Copy/map artifacts deterministically<br/>(daily items, library references, rules, hooks/scripts)
    Compiler->>FileSystem: Create routers for library discovery
    Compiler->>Controller: Compilation complete
    Controller->>User: Report verification status<br/>(counts, router/rules/hook status)
    User->>Controller: Run cleanup<br/>(remove stale rule directories)
    Controller->>FileSystem: Remove stale artifacts
    Controller->>User: Cleanup complete
```

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
Pre-merge checks: ✅ 3 passed.
5 issues found across 1 file
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="skills/agent-sort/SKILL.md">
<violation number="1" location="skills/agent-sort/SKILL.md:30">
P1: Setup docs instruct cloning external source and installing executable hooks/scripts from it, creating a supply-chain trust bypass.</violation>
<violation number="2" location="skills/agent-sort/SKILL.md:86">
P2: Rule installation logic is incomplete: Agent 4 only detects TypeScript and Python, so language-specific rules for other supported stacks are skipped.</violation>
<violation number="3" location="skills/agent-sort/SKILL.md:132">
P2: Rule copy targets are referenced without creating required subdirectories, which can cause rule installation to fail.</violation>
<violation number="4" location="skills/agent-sort/SKILL.md:163">
P2: Hooks configuration is copied unfiltered even though some hook scripts may be skipped/adaptation-required, which can leave `hooks.json` referencing unavailable or incompatible scripts.</violation>
<violation number="5" location="skills/agent-sort/SKILL.md:189">
P2: Rule-count verification uses a non-portable/non-recursive glob pattern and can report incorrect totals.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
If not found, tell the user:
```
ECC not found. Run: git clone https://github.com/affaan-m/everything-claude-code.git ~/ecc-reference
```
P1: Setup docs instruct cloning external source and installing executable hooks/scripts from it, creating a supply-chain trust bypass.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/agent-sort/SKILL.md, line 30:
<comment>Setup docs instruct cloning external source and installing executable hooks/scripts from it, creating a supply-chain trust bypass.</comment>
<file context>
@@ -0,0 +1,233 @@
+
+If not found, tell the user:
+```
+ECC not found. Run: git clone https://github.com/affaan-m/everything-claude-code.git ~/ecc-reference
+```
+
</file context>
### Copy hooks + scripts
```
{ECC_PATH}/hooks/hooks.json → .claude/hooks/
```
P2: Hooks configuration is copied unfiltered even though some hook scripts may be skipped/adaptation-required, which can leave hooks.json referencing unavailable or incompatible scripts.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/agent-sort/SKILL.md, line 163:
<comment>Hooks configuration is copied unfiltered even though some hook scripts may be skipped/adaptation-required, which can leave `hooks.json` referencing unavailable or incompatible scripts.</comment>
<file context>
@@ -0,0 +1,233 @@
+
+### Copy hooks + scripts
+```
+{ECC_PATH}/hooks/hooks.json → .claude/hooks/
+{ECC_PATH}/scripts/hooks/*.js → .claude/scripts/hooks/
+{ECC_PATH}/scripts/lib/*.js → .claude/scripts/lib/
</file context>
### Create directory structure
```bash
mkdir -p .claude/skills/skill-library/references
mkdir -p .claude/rules
```
P2: Rule copy targets are referenced without creating required subdirectories, which can cause rule installation to fail.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/agent-sort/SKILL.md, line 132:
<comment>Rule copy targets are referenced without creating required subdirectories, which can cause rule installation to fail.</comment>
<file context>
@@ -0,0 +1,233 @@
+### Create directory structure
+```bash
+mkdir -p .claude/skills/skill-library/references
+mkdir -p .claude/rules
+mkdir -p .claude/hooks
+mkdir -p .claude/scripts/hooks .claude/scripts/lib
</file context>
```
List all rule files in {ECC_PATH}/rules/ (all subdirectories).
Check this repo for matching languages:
- Glob("**/*.ts") → install rules/typescript/
```
P2: Rule installation logic is incomplete: Agent 4 only detects TypeScript and Python, so language-specific rules for other supported stacks are skipped.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/agent-sort/SKILL.md, line 86:
<comment>Rule installation logic is incomplete: Agent 4 only detects TypeScript and Python, so language-specific rules for other supported stacks are skipped.</comment>
<file context>
@@ -0,0 +1,233 @@
+```
+List all rule files in {ECC_PATH}/rules/ (all subdirectories).
+Check this repo for matching languages:
+- Glob("**/*.ts") → install rules/typescript/
+- Glob("**/*.py") → install rules/python/
+- rules/common/ → always install
</file context>
```
ls .claude/skills/skill-library/references/ | wc -l

# Count rules
ls .claude/rules/**/*.md | wc -l
```
P2: Rule-count verification uses a non-portable/non-recursive glob pattern and can report incorrect totals.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/agent-sort/SKILL.md, line 189:
<comment>Rule-count verification uses a non-portable/non-recursive glob pattern and can report incorrect totals.</comment>
<file context>
@@ -0,0 +1,233 @@
+ls .claude/skills/skill-library/references/ | wc -l
+
+# Count rules
+ls .claude/rules/**/*.md | wc -l
+
+# Verify hooks
</file context>
Actionable comments posted: 4
🧹 Nitpick comments (3)
skills/agent-sort/SKILL.md (3)
219-233: Consider validating hardcoded item lists against the actual ECC structure. The hardcoded list of always-DAILY items (lines 223-231) creates coupling to specific ECC content. If these items are renamed or removed in future ECC versions, the skill could fail silently or produce incorrect results.

Consider adding validation:
- Check that listed items exist in ECC before assuming they're DAILY
- Generate warnings for missing expected items
- Or derive the list dynamically from ECC metadata rather than hardcoding

This would make the skill more resilient to ECC structure changes.

137-149: Consider adding concrete copy command examples. The installation patterns (lines 139-149) use convention notation like "Agents → .claude/skills/{name}/SKILL.md", which may be clear to AI agents but could benefit from concrete examples for human users.

Example command templates:

For each DAILY agent:

```bash
AGENT_NAME="agent-name"  # extracted from agent-name.md
mkdir -p ".claude/skills/${AGENT_NAME}"
cp "${ECC_PATH}/agents/${AGENT_NAME}.md" ".claude/skills/${AGENT_NAME}/SKILL.md"
```

For each DAILY skill:

```bash
SKILL_NAME="skill-name"  # extracted from skills/skill-name/
cp -r "${ECC_PATH}/skills/${SKILL_NAME}/" ".claude/skills/${SKILL_NAME}/"
```

39-59: Clarify that agent instructions use pseudo-code patterns. The agent instruction blocks (lines 40-58 and the similar blocks for other agents) use pseudo-code notation like `Glob("**/*.ts")` inside what appear to be bash code blocks. This could confuse users who might try to execute them literally.

Consider either:
1. Adding a note that these are instructions for AI agents, not executable shell commands
2. Providing actual implementation examples alongside the pseudo-code
3. Using different formatting (e.g., plain text instead of code blocks) to distinguish them from the executable commands in Steps 1, 5, and 6

📥 Commits: reviewing files that changed from the base of the PR and between 8b6140dedca32f081ba6964854bb70da31237f5e and 292fd73864913c394830b590f14aca77fe870422.
📒 Files selected for processing (1): `skills/agent-sort/SKILL.md`
---
name: agent-sort
description: >
  Sort and install ECC for any project. Launches 6 parallel agents that read every
  ECC item, search the actual codebase for evidence, and sort into DAILY vs LIBRARY.
  Use when: "sort ECC", "set up ECC", "install ECC for this project", "cherry pick ECC",
  "configure ECC skills", or when starting ECC setup on a new repo.
---

# Agent-Sort: Automated ECC Cherry-Picker
## Prerequisites

ECC must be cloned locally. Check these locations in order:
1. `~/ecc-reference/`
2. `/tmp/everything-claude-code/`
3. Ask the user for the path if neither exists.

## Step 1: Locate ECC

```bash
ECC_PATH=""
if [ -d ~/ecc-reference/agents ]; then ECC_PATH=~/ecc-reference
elif [ -d /tmp/everything-claude-code/agents ]; then ECC_PATH=/tmp/everything-claude-code
fi
```

If not found, tell the user:
```
ECC not found. Run: git clone https://github.com/affaan-m/everything-claude-code.git ~/ecc-reference
```
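The two-location check above can be generalized to a small helper, which makes adding candidate paths trivial. A minimal POSIX-sh sketch (the `locate_ecc` function name is ours, not part of ECC):

```shell
# locate_ecc CANDIDATE... — print the first directory that looks like a valid
# ECC checkout (i.e. contains agents/); fail if none of them do.
locate_ecc() {
  for candidate in "$@"; do
    if [ -d "$candidate/agents" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

if ECC_PATH=$(locate_ecc "$HOME/ecc-reference" /tmp/everything-claude-code); then
  echo "Using ECC at: $ECC_PATH"
else
  echo 'ECC not found. Run: git clone https://github.com/affaan-m/everything-claude-code.git ~/ecc-reference' >&2
fi
```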
## Step 2: Launch 6 Parallel Agents

Launch ALL 6 agents simultaneously using the Agent tool. Each agent reads every ECC item in its category and searches THIS repo for matching languages, frameworks, imports, and file types.

### Agent 1 — AGENTS (reads ~/ecc-reference/agents/*.md)

```
Read the first 20 lines of each agent .md file in {ECC_PATH}/agents/.
For each, search THIS repo for matching languages, frameworks, imports, file extensions.

Evidence checks:
- Glob("**/*.ts", "**/*.tsx") for TypeScript
- Glob("**/*.py") for Python
- Glob("**/*.go", "**/go.mod") for Go
- Glob("**/*.rs", "**/Cargo.toml") for Rust
- Check package.json for framework dependencies

Output format:
DAILY:
- agent-name.md | one-line evidence from repo

LIBRARY:
- agent-name.md | reason

SKIP:
- agent-name.md | reason
```
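The `Glob(...)` evidence checks above are agent pseudo-code, not executable shell. An equivalent shell sketch (helper names are illustrative, not part of ECC) could look like:

```shell
# has_files PATTERN — succeed if any matching file exists, ignoring node_modules.
has_files() {
  find . -path ./node_modules -prune -o -name "$1" -type f -print 2>/dev/null \
    | head -n 1 | grep -q .
}

# Print one detected language per line, mirroring the evidence checks above.
detect_stack() {
  if has_files '*.ts' || has_files '*.tsx'; then echo typescript; fi
  if has_files '*.py'; then echo python; fi
  if has_files '*.go' || has_files 'go.mod'; then echo go; fi
  if has_files '*.rs' || has_files 'Cargo.toml'; then echo rust; fi
}
```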
### Agent 2 — SKILLS (reads ~/ecc-reference/skills/*/SKILL.md)

```
Read the SKILL.md frontmatter (first 15 lines) in each {ECC_PATH}/skills/*/ folder.
For each, search THIS repo for matching patterns, dependencies, and file structure.

Output: DAILY / LIBRARY / SKIP with evidence.
```
### Agent 3 — COMMANDS (reads ~/ecc-reference/commands/*.md)

```
Read the first 15 lines of each {ECC_PATH}/commands/*.md.
- Language-specific commands (/go-review, /rust-build) → match to project languages
- General dev commands (/plan, /verify, /tdd) → DAILY for any project
- Meta-tool commands (/instinct-status, /evolve) → LIBRARY

Output: DAILY / LIBRARY / SKIP with evidence.
```
### Agent 4 — RULES (reads ~/ecc-reference/rules/**/*.md)

```
List all rule files in {ECC_PATH}/rules/ (all subdirectories).
Check this repo for matching languages, covering every language directory ECC ships:
- Glob("**/*.ts") → install rules/typescript/
- Glob("**/*.py") → install rules/python/
- Glob("**/*.go") → install rules/go/ (if present in ECC)
- Glob("**/*.rs") → install rules/rust/ (if present in ECC)
- rules/common/ → always install

Check if .claude/rules/ already exists. Flag duplicates.

Output: INSTALL / SKIP per file.
```
### Agent 5 — HOOKS (reads ~/ecc-reference/hooks/)

```
Read {ECC_PATH}/hooks/hooks.json and {ECC_PATH}/hooks/README.md.
List all scripts in {ECC_PATH}/scripts/hooks/ and {ECC_PATH}/scripts/lib/.
Check this repo for:
- Prettier config (.prettierrc, prettier in package.json)
- TypeScript (tsconfig.json)
- ESLint config
- OS: flag tmux hooks on Windows, osascript hooks on non-macOS

Output: INSTALL / SKIP / NEEDS-ADAPTATION per item.
```
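Agent 5's repo checks can be sketched in shell (the `hook_flags` function name is ours; it only reports evidence, and the agent still decides INSTALL vs SKIP):

```shell
# hook_flags REPO_DIR — print one flag per line for the hook-compat checks above.
hook_flags() {
  repo="$1"
  if [ -f "$repo/tsconfig.json" ]; then echo "typescript"; fi
  if [ -f "$repo/.prettierrc" ] || grep -q '"prettier"' "$repo/package.json" 2>/dev/null; then
    echo "prettier"
  fi
  if ls "$repo"/.eslintrc* >/dev/null 2>&1; then echo "eslint"; fi
  case "$(uname -s)" in
    Darwin) ;;                         # osascript hooks are fine on macOS
    *) echo "adapt: osascript hooks" ;;
  esac
}
```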
### Agent 6 — EXTRAS (contexts, guides, configs)

```
Read {ECC_PATH}/contexts/*.md → always DAILY (lightweight mode-switchers).
Check {ECC_PATH}/mcp-configs/, examples/, guides → LIBRARY-REFERENCE.
Check .agents/, docs/, tests/ → SKIP (ECC internals).

Output: DAILY / LIBRARY-REFERENCE / SKIP per item.
```
## Step 3: Compile Results

After all 6 agents return, combine results into a single sorted list:
- DAILY items (typically ~50)
- LIBRARY items (typically ~170)
- INSTALL rules (typically ~14 for one language)
- INSTALL hooks/scripts (typically ~40)
## Step 4: Install

### Create directory structure
```bash
mkdir -p .claude/skills/skill-library/references
mkdir -p .claude/rules/common
mkdir -p .claude/hooks
mkdir -p .claude/scripts/hooks .claude/scripts/lib
# Language rule subdirectories (e.g. .claude/rules/typescript) are created
# in "Copy rules" below, once the matching languages are known.
```
### Copy DAILY items
```
Agents → .claude/skills/{name}/SKILL.md
Skills → .claude/skills/{name}/ (cp -r, keeps references/)
Commands → .claude/skills/cmd-{name}/SKILL.md
Contexts → .claude/skills/context-{name}/SKILL.md
```

### Copy LIBRARY items
```
All non-DAILY → .claude/skills/skill-library/references/{prefix}-{name}.md
Prefix: agent-, skill-, cmd- to avoid name collisions
```
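The prefixing convention above can be sketched as a small copy helper (the function name and argument layout are ours):

```shell
# copy_library_refs SRC_DIR PREFIX DEST_DIR — flatten SRC_DIR/*.md into DEST_DIR
# as PREFIX-name.md so agents, skills, and commands cannot collide.
copy_library_refs() {
  src="$1"; prefix="$2"; dest="$3"
  mkdir -p "$dest"
  for f in "$src"/*.md; do
    [ -e "$f" ] || continue            # glob matched nothing; skip
    cp "$f" "$dest/$prefix-$(basename "$f")"
  done
}

# e.g. copy_library_refs "$ECC_PATH/agents" agent .claude/skills/skill-library/references
```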
### Create router
Create `.claude/skills/skill-library/SKILL.md` with a trigger table listing every library item with keywords that would activate it. This is the ONLY way Claude finds library items.
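A minimal illustrative router file might look like the following. The frontmatter fields, column names, and reference names are hypothetical — ECC does not prescribe an exact schema here:

```markdown
---
name: skill-library
description: Router for on-demand library skills. Match user intent against the trigger table below.
---

# Skill Library Router

| Reference | Trigger keywords |
|---|---|
| agent-go-reviewer.md | go review, golang lint |
| cmd-evolve.md | evolve instincts, meta tooling |
| skill-db-migrations.md | migration, schema change |
```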
### Copy rules
Only matching languages + common/. Create each target subdirectory first (`mkdir -p .claude/rules/common .claude/rules/{language}`), otherwise the copies below can fail:
```
{ECC_PATH}/rules/common/*.md → .claude/rules/common/
{ECC_PATH}/rules/{language}/*.md → .claude/rules/{language}/
```
### Copy hooks + scripts
```
{ECC_PATH}/hooks/hooks.json → .claude/hooks/
{ECC_PATH}/scripts/hooks/*.js → .claude/scripts/hooks/
{ECC_PATH}/scripts/lib/*.js → .claude/scripts/lib/
{ECC_PATH}/scripts/setup-package-manager.js → .claude/scripts/
```

Skip any items marked NEEDS-ADAPTATION, with a note to the user. If any scripts were skipped, edit the copied `hooks.json` to remove entries referencing them, so it never points at unavailable or incompatible scripts.
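One hedged way to keep the copied scripts consistent with the agent's INSTALL/SKIP decisions is to copy only an approved list and warn on anything missing. The function name and the one-filename-per-line list format are our assumptions:

```shell
# copy_approved_scripts LIST_FILE SRC_DIR DEST_DIR — copy only the scripts the
# hooks agent marked INSTALL (one filename per line in LIST_FILE).
copy_approved_scripts() {
  list="$1"; src="$2"; dest="$3"
  mkdir -p "$dest"
  while IFS= read -r script; do
    [ -n "$script" ] || continue
    if [ -f "$src/$script" ]; then
      cp "$src/$script" "$dest/"
    else
      echo "warn: $script approved but missing in $src" >&2
    fi
  done < "$list"
}
```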
### Copy guides as references
```
{ECC_PATH}/the-shortform-guide.md → .claude/skills/skill-library/references/
{ECC_PATH}/the-longform-guide.md → .claude/skills/skill-library/references/
{ECC_PATH}/the-security-guide.md → .claude/skills/skill-library/references/
```
## Step 5: Verify

Check every item landed on disk. Use `find` rather than `ls` globs so the counts are portable and do not depend on shell globstar settings:
```bash
# Count daily skills (exclude skill-library)
find .claude/skills/ -mindepth 1 -maxdepth 1 -type d ! -name skill-library | wc -l

# Count library references
find .claude/skills/skill-library/references/ -type f 2>/dev/null | wc -l

# Count rules (recursive)
find .claude/rules/ -name "*.md" -type f | wc -l

# Verify hooks
test -f .claude/hooks/hooks.json && echo "✓" || echo "✗"

# Count scripts
find .claude/scripts/hooks/ -type f 2>/dev/null | wc -l
find .claude/scripts/lib/ -type f 2>/dev/null | wc -l
```
Print summary:
```
Daily skills: XX folders
Library refs: XX files
Router: ✓/✗
Rules: XX files (common + {language})
Hook scripts: XX files
Lib scripts: XX files
Total files: XX
Token cost: ~X,XXX always loaded
```
## Step 6: Clean up stale rules

If `.claude/rules/` has language directories for languages NOT found in the repo, remove them:
```bash
# Example: if no Python files exist
test $(find . -name "*.py" -not -path "./node_modules/*" | wc -l) -eq 0 && rm -rf .claude/rules/python/
```
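A more defensive variant of the one-liner above: it also excludes `.claude/` itself from the search (so copied examples cannot skew the count) and only removes the directory after an explicit zero-count check. The function name is ours:

```shell
# stale_python_rules — remove .claude/rules/python/ only when the repo has no
# Python files outside node_modules and .claude itself.
stale_python_rules() {
  py_count=$(find . -path ./node_modules -prune -o -path ./.claude -prune -o \
    -name '*.py' -type f -print 2>/dev/null | wc -l)
  if [ "$py_count" -eq 0 ] && [ -d .claude/rules/python ]; then
    echo "removing stale .claude/rules/python/"
    rm -rf .claude/rules/python/
  fi
}
```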
## What stays the same across every project

These are always DAILY regardless of stack:
```
Agents: planner, architect, code-reviewer, security-reviewer,
        build-error-resolver, refactor-cleaner, tdd-guide,
        docs-lookup, doc-updater
Skills: coding-standards, tdd-workflow, security-review, security-scan,
        continuous-learning, strategic-compact, verification-loop
Commands: learn, checkpoint, docs, aside, plan, verify, save-session,
          resume-session, quality-gate, build-fix, code-review, refactor-clean
Contexts: dev, review, research
```

~35 items always daily. The remaining ~15 daily items come from matching the project's stack.
🛠️ Refactor suggestion | 🟠 Major

Restructure to include required sections: 'When to Use', 'How It Works', and 'Examples'.

The skill file must follow the standard format with clearly defined sections. Currently, the procedural steps are present but not organized according to the required structure. As per coding guidelines, skills must be formatted with clear sections including 'When to Use', 'How It Works', and 'Examples'.

Suggested structure:
- When to Use: extract trigger phrases from the frontmatter description and expand with use cases
- How It Works: convert Steps 1-6 into a narrative explanation of the workflow
- Examples: add at least one complete example showing the skill in action with expected output
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/agent-sort/SKILL.md` around lines 1 - 233, The SKILL.md must be
restructured to include the required top-level sections "When to Use", "How It
Works", and "Examples": extract the trigger phrases from the frontmatter
description into a concise "When to Use" list (include phrases like "sort ECC",
"set up ECC", "install ECC for this project", etc.), convert Steps 1–6 into a
narrative "How It Works" that explains locating ECC, launching 6 agents (Agents,
Skills, Commands, Rules, Hooks, Extras), compiling results, installing files,
and verification, and add an "Examples" section with at least one end-to-end
example showing a sample command/flow and the expected outputs (summary counts
and success checks) referencing the Step 5 verification commands so the example
demonstrates their expected results; update any inline procedural fragments to
follow prose sections rather than raw step headings.
### Create router
Create `.claude/skills/skill-library/SKILL.md` with a trigger table listing every library item with keywords that would activate it. This is the ONLY way Claude finds library items.
Specify the router SKILL.md format and structure.
Line 152 states that creating the router file is "the ONLY way Claude finds library items," but provides no details on its format, structure, or content. Users need:
- The exact frontmatter schema required
- The trigger table format (markdown table? list? YAML?)
- How to extract/generate keywords from library items
- A complete example of a valid router file
Would you like me to draft a specification or example for the router SKILL.md format?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/agent-sort/SKILL.md` around lines 151 - 152, Add a clear specification
for the router SKILL.md by updating the "Create router" section: define the
required frontmatter schema (e.g., title, id, description, version, author),
specify the trigger table format as a Markdown table with columns like "item_id
| keywords | description | path" and exact parsing rules, describe how to
extract/generate keywords from library items (suggest tokenization, stopword
removal, and manual overrides via a "keywords" field in each item), and include
a complete, concrete example SKILL.md that demonstrates frontmatter plus a
populated trigger table for multiple library items; reference the
".claude/skills/skill-library/SKILL.md" file and the "Create router" heading so
reviewers can find and validate the added schema and example.
```bash
# Count daily skills (exclude skill-library)
ls -d .claude/skills/*/ | grep -v skill-library | wc -l

# Count library references
ls .claude/skills/skill-library/references/ | wc -l

# Count rules
ls .claude/rules/**/*.md | wc -l

# Verify hooks
test -f .claude/hooks/hooks.json && echo "✓" || echo "✗"

# Count scripts
ls .claude/scripts/hooks/ | wc -l
ls .claude/scripts/lib/ | wc -l
```
Fix shell commands to handle edge cases and ensure portability.
The verification commands have several issues:
- Line 183: `ls -d .claude/skills/*/` will error if no directories exist
- Line 189: `ls .claude/rules/**/*.md` relies on bash globstar expansion, which may not be enabled; use `find` instead
- Missing error handling throughout
🔧 Proposed fixes for robust verification

```diff
 # Count daily skills (exclude skill-library)
-ls -d .claude/skills/*/ | grep -v skill-library | wc -l
+find .claude/skills/ -mindepth 1 -maxdepth 1 -type d ! -name skill-library | wc -l

 # Count library references
-ls .claude/skills/skill-library/references/ | wc -l
+find .claude/skills/skill-library/references/ -type f 2>/dev/null | wc -l

 # Count rules
-ls .claude/rules/**/*.md | wc -l
+find .claude/rules/ -name "*.md" -type f | wc -l

 # Verify hooks
 test -f .claude/hooks/hooks.json && echo "✓" || echo "✗"

 # Count scripts
-ls .claude/scripts/hooks/ | wc -l
-ls .claude/scripts/lib/ | wc -l
+find .claude/scripts/hooks/ -type f 2>/dev/null | wc -l
+find .claude/scripts/lib/ -type f 2>/dev/null | wc -l
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/agent-sort/SKILL.md` around lines 181-197: replace fragile ls/globstar usage with robust checks. For the skills count, ensure the `.claude/skills` directory exists, list only immediate subdirectories, exclude the `skill-library` entry, and return 0 if none exist. For library references, ensure `.claude/skills/skill-library/references` exists and count files there. For rules, stop using the globstar pattern (`.claude/rules/**/*.md`) and use a recursive file search limited to `*.md` under `.claude/rules`. For hooks, keep the existing test for `.claude/hooks/hooks.json` but explicitly check that the directory exists and handle missing files gracefully. For scripts (`.claude/scripts/hooks` and `.claude/scripts/lib`), ensure the directories exist before counting and return zero if empty. Overall, replace glob-dependent commands with existence checks and recursive file searches so the checks are portable and safe.
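The guarded-counting pattern this prompt describes can be condensed into one small helper. The sketch below is illustrative only — the `count_files` name and the demo paths are not part of the skill:

```shell
#!/bin/sh
# count_files DIR: recursively count regular files under DIR,
# printing 0 instead of erroring when DIR does not exist.
count_files() {
  if [ -d "$1" ]; then
    find "$1" -type f | wc -l | tr -d ' '
  else
    echo 0
  fi
}

# Demo against a throwaway tree rather than a real .claude/ checkout.
demo=$(mktemp -d)
mkdir -p "$demo/rules/typescript"
: > "$demo/rules/typescript/style.md"
: > "$demo/rules/typescript/naming.md"

count_files "$demo/rules"        # -> 2
count_files "$demo/scripts/lib"  # missing directory -> 0

rm -rf "$demo"
```

Each verification line in Step 5 then becomes a single `count_files` call that behaves the same whether or not the directory exists.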
> ## Step 6: Clean up stale rules
>
> If `.claude/rules/` has language directories for languages NOT found in the repo, remove them:
>
> ```bash
> # Example: if no Python files exist
> test $(find . -name "*.py" -not -path "./node_modules/*" | wc -l) -eq 0 && rm -rf .claude/rules/python/
> ```
Add safeguards to prevent unintended file deletion.
The cleanup command at line 216 has critical safety issues:
- If the `find` command fails, the test may behave unexpectedly
- No validation that the target path is within `.claude/rules/`
- No existence check before `rm -rf`
- No error handling or confirmation prompt
🛡️ Proposed safer implementation

> ## Step 6: Clean up stale rules
>
> If `.claude/rules/` has language directories for languages NOT found in the repo, remove them:

```diff
 # Example: if no Python files exist
-test $(find . -name "*.py" -not -path "./node_modules/*" | wc -l) -eq 0 && rm -rf .claude/rules/python/
+if [ -d .claude/rules/python/ ]; then
+  PY_COUNT=$(find . -name "*.py" -not -path "./node_modules/*" -not -path "./.claude/*" 2>/dev/null | wc -l)
+  if [ "$PY_COUNT" -eq 0 ]; then
+    echo "No Python files found, removing .claude/rules/python/"
+    rm -rf .claude/rules/python/
+  fi
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/agent-sort/SKILL.md` around lines 211-217: replace the unsafe one-liner that conditionally runs `rm -rf` on `.claude/rules/python/` with a guarded sequence. First check that the directory `.claude/rules/python/` exists; then run the `find` command, excluding `./.claude/*` and `./node_modules/*` and redirecting errors to /dev/null so a `find` failure cannot affect the test; capture the file count into a variable (e.g., PY_COUNT); and only call `rm -rf` if PY_COUNT equals 0. Also echo a confirmation/log message before removal to make the action explicit. Ensure the logic references the same path `.claude/rules/python/` and that `rm -rf` is only executed after the directory-existence and zero-count checks succeed.
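Generalized to any language, the guarded deletion the prompt describes could look like this sketch; the `remove_stale_rules` helper and its demo paths are illustrative, not part of the skill:

```shell
#!/bin/sh
# remove_stale_rules ROOT EXT LANG_DIR:
# delete LANG_DIR only if it exists AND no *.EXT files live under ROOT
# (ignoring node_modules and the .claude tree itself).
remove_stale_rules() {
  root=$1; ext=$2; lang_dir=$3
  [ -d "$lang_dir" ] || return 0   # nothing to clean up
  count=$(find "$root" -name "*.$ext" \
            -not -path "*/node_modules/*" \
            -not -path "*/.claude/*" 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "No *.$ext files found, removing $lang_dir"
    rm -rf "$lang_dir"
  fi
}

# Demo in a throwaway tree instead of a real project.
demo=$(mktemp -d)
mkdir -p "$demo/.claude/rules/python" "$demo/.claude/rules/typescript" "$demo/src"
: > "$demo/src/app.ts"

remove_stale_rules "$demo" py "$demo/.claude/rules/python"      # removed: no .py files
remove_stale_rules "$demo" ts "$demo/.claude/rules/typescript"  # kept: app.ts exists

ls "$demo/.claude/rules"   # -> typescript
rm -rf "$demo"
```

The directory-existence check runs first, so the `find` and `rm -rf` never execute against a path that is not already a rules directory.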
Summary
Adds a new `skills/agent-sort/` skill that automates ECC cherry-picking for any project using 6 parallel agents.

Related to #916 — I opened the issue and have a production-tested implementation.

How it works

The 6 agents

Why this approach

- `configure-ecc` is interactive — this is fully automated
- `install-plan.js` works at module level — this works at per-skill level

Tested on

React Native + TypeScript + Supabase production app. Sorted 291 items: 51 daily, 168 library, 14 rules, 39 hooks/scripts. Token cost: ~5,100 always loaded (2.6% of 200K).

Test plan

Summary by cubic

Adds the `skills/agent-sort` skill to automate evidence-based ECC setup for any repo. It scans your codebase, sorts ECC items into DAILY vs LIBRARY, and installs the right skills, rules, and hooks in one pass.

New Features

- Installs into `.claude/skills/`, `.claude/rules/`, `.claude/hooks/`, and `.claude/scripts/`.
- Creates `.claude/skills/skill-library/SKILL.md` with trigger keywords.
- Looks for ECC at `~/ecc-reference` or `/tmp/everything-claude-code`; prompts for a path if missing.

Migration

- `git clone https://github.com/affaan-m/everything-claude-code.git ~/ecc-reference`
- Everything installs under `.claude/`.

Written for commit 292fd73. Summary will update on new commits.
Summary by CodeRabbit