
Commit f82d644

style: fix isort and ruff issues in CLI files and tests
1 parent 8080f79 commit f82d644

File tree: 19 files changed, +1549 -76 lines

Lines changed: 192 additions & 0 deletions
@@ -0,0 +1,192 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr list:*), Bash(git:*), Bash(python:*)
description: AgentReady-specific code review with attribute mapping and score impact analysis
disable-model-invocation: false
---

Provide an AgentReady-specific code review for the given pull request.

This command extends the standard `/code-review` with agentready project-specific concerns:
- Map findings to the 25 agentready attributes (from agent-ready-codebase-attributes.md)
- Calculate impact on self-assessment score (current: 80.0/100 Gold)
- Generate remediation commands (agentready bootstrap, black, pytest, etc.)
- Link to relevant CLAUDE.md sections
- Categorize by severity with confidence scoring

## Process

Follow these steps precisely:

1. **Eligibility Check** (Haiku agent)
   - Check if the PR is closed, a draft, or already reviewed
   - Skip automated PRs (dependabot, renovate) unless they touch assessors
   - If ineligible, exit early

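   A minimal sketch of this check with `gh`; the PR number and the `--jq` shaping are illustrative:

   ```bash
   # Probe PR state; 123 is a placeholder PR number
   gh pr view 123 --json state,isDraft,author \
     --jq '{state: .state, draft: .isDraft, author: .author.login}'
   # Skip when state is not OPEN, the PR is a draft, or the author is
   # dependabot/renovate and no assessor files are touched
   ```
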
2. **Context Gathering** (Haiku agent)
   - List all CLAUDE.md files (root + modified directories)
   - Get full PR diff and metadata
   - Return concise summary of changes

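   For example, the gathering step might run commands like these (the PR number is a placeholder):

   ```bash
   gh pr view 123 --json title,body,author,files   # metadata and changed files
   gh pr diff 123                                  # full unified diff
   git ls-files '*CLAUDE.md'                       # CLAUDE.md files anywhere in the repo
   ```
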
3. **Parallel AgentReady-Focused Review** (5 Sonnet agents)
   Launch 5 parallel agents to independently review:

   **Agent #1: CLAUDE.md Compliance Audit**
   - Check adherence to CLAUDE.md development workflows
   - Verify pre-push linting (black, isort, ruff)
   - Verify test requirements (pytest, >80% coverage)
   - Check branch verification, conventional commits

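   A sketch of the commands Agent #1 can run to verify the lint and test requirements; this assumes the repo's standard tooling and that pytest-cov is installed:

   ```bash
   black --check .
   isort --check-only .
   ruff check .
   pytest --cov --cov-fail-under=80
   ```
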
   **Agent #2: AgentReady-Specific Bug Scan**
   Focus on agentready assessment logic:
   - TOCTOU bugs in file system operations
   - AST parsing correctness (false positives/negatives in assessors)
   - Measurement accuracy issues
   - Type annotation correctness
   - Error handling patterns (try-except, graceful degradation)

   **Agent #3: Historical Context Analysis**
   - Read git blame for modified assessor files
   - Check for regression in assessment accuracy
   - Verify attribute scoring logic hasn't changed unintentionally

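   Illustrative history commands; the assessor path is a placeholder:

   ```bash
   git log --follow --oneline -- path/to/assessor.py   # who changed it, and when
   git blame HEAD -- path/to/assessor.py               # line-level attribution
   ```
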
   **Agent #4: Previous PR Comment Analysis**
   - Review comments on past PRs touching the same files
   - Check for recurring issues

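   One way to locate those PRs; the path and PR number are placeholders, and this relies on merged PR numbers appearing in commit subjects (as with GitHub squash and merge commits):

   ```bash
   git log --oneline -- path/to/assessor.py   # commits touching the file, PR numbers in subjects
   gh pr view 117 --comments                  # read the discussion on one of those PRs
   ```
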
   **Agent #5: Code Comment Compliance**
   - Verify changes follow inline comment guidance

4. **Attribute Mapping** (Haiku agent for each issue)
   For each issue found:
   - Map to specific agentready attribute ID (e.g., "2.3 Type Annotations")
   - Determine tier (1=Essential, 2=Critical, 3=Important, 4=Advanced)
   - Calculate score impact using tier weights (Tier 1: 50%, Tier 2: 30%, etc.)
   - Generate remediation command (black, pytest, agentready bootstrap --fix)
   - Link to CLAUDE.md section

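   A hypothetical sketch of the tier-weighted impact calculation; the Tier 3/4 weights and the per-attribute point value are assumptions, not the published agentready scoring algorithm:

   ```python
   # Tier 1 and 2 weights come from the step above; Tier 3/4 are assumed
   TIER_WEIGHTS = {1: 0.50, 2: 0.30, 3: 0.15, 4: 0.05}

   def score_impact(tier: int, attribute_points: float) -> float:
       """Points lost on the 100-point scale if this attribute regresses."""
       return round(TIER_WEIGHTS[tier] * attribute_points, 1)
   ```
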
5. **Confidence Scoring** (parallel Haiku agents)
   For each issue, score 0-100 confidence:
   - 0: False positive
   - 25: Might be real, unverified
   - 50: Verified but minor
   - 75: Very likely real, important
   - 90: Critical issue (auto-fix candidate)
   - 100: Blocker (definitely auto-fix)

   **Critical Issue Criteria** (confidence ≥90):
   - Security vulnerabilities (path traversal, injection)
   - TOCTOU race conditions
   - Assessment accuracy bugs (false positives/negatives)
   - Type safety violations causing runtime errors
   - Missing error handling leading to crashes

6. **Filter Issues**
   - Keep issues with confidence ≥80 for reporting
   - Flag issues with confidence ≥90 as "auto-fix candidates"
   - Calculate aggregate score impact

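   A minimal sketch of these filtering rules (the `Issue` shape is assumed):

   ```python
   from dataclasses import dataclass

   @dataclass
   class Issue:
       description: str
       confidence: int      # 0-100, from step 5
       score_impact: float  # points, from step 4

   def filter_issues(issues: list[Issue]) -> tuple[list[Issue], list[Issue], float]:
       reportable = [i for i in issues if i.confidence >= 80]
       auto_fix = [i for i in reportable if i.confidence >= 90]
       return reportable, auto_fix, sum(i.score_impact for i in reportable)
   ```
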
7. **Final Eligibility Check** (Haiku agent)
   - Verify the PR is still eligible for review
   - Check if the PR was updated during review

8. **Post Review Comment** (using gh pr comment)
   Use the custom AgentReady format (see below)

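   For example, with the PR number and file name as placeholders:

   ```bash
   gh pr comment 123 --body-file agentready-review.md
   ```
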
## AgentReady Review Output Format

Use this format precisely:

---

### 🤖 AgentReady Code Review

**PR Status**: [X issues found] ([Y 🔴 Critical], [Z 🟡 Major], [W 🔵 Minor])
**Score Impact**: Current 80.0/100 → [calculated score] if all issues fixed
**Certification**: Gold → [Platinum/Gold/Silver/Bronze] potential

---

#### 🔴 Critical Issues (Confidence ≥90) - Auto-Fix Recommended

##### 1. [Brief description]
**Attribute**: [ID Name] (Tier [N]) - [Link to CLAUDE.md section]
**Confidence**: [90-100]%
**Score Impact**: [−X.X points]
**Location**: [GitHub permalink with full SHA]

**Issue Details**:
[Concise explanation with code snippet if relevant]

**Remediation**:
```bash
# Automated fix available via:
# (Will be applied automatically if this is a blocker/critical)
[specific command: black file.py, pytest tests/test_foo.py, etc.]
```

---

#### 🟡 Major Issues (Confidence 80-89) - Manual Review Required

##### 2. [Brief description]
**Attribute**: [ID Name] (Tier [N])
**Confidence**: [80-89]%
**Score Impact**: [−X.X points]
**Location**: [GitHub permalink]

[Details and remediation as above]

---

#### Summary

- **Auto-Fix Candidates**: [N critical issues] flagged for automatic resolution
- **Manual Review**: [M major issues] require human judgment
- **Total Score Improvement Potential**: +[X.X points] if all issues addressed
- **AgentReady Assessment**: Run `agentready assess .` after fixes to verify score

---

🤖 Generated with [Claude Code](https://claude.ai/code)

<sub>- If this review was useful, react with 👍. Otherwise, react with 👎.</sub>

---

## False Positive Examples

Avoid flagging:
- Pre-existing issues (not in the PR diff)
- Intentional functionality changes
- Issues caught by CI (linters, tests, type checkers)
- Pedantic style issues not in CLAUDE.md
- General code quality unless explicitly in CLAUDE.md
- Issues with lint-ignore comments
- Code on unmodified lines

## AgentReady-Specific Focus Areas

**High Priority**:
1. File system race conditions (TOCTOU in scanning logic; see the sketch at the end of this section)
2. Assessment accuracy (false positives/negatives)
3. Type annotations (Python 3.11+ compatibility)
4. Error handling (graceful degradation)
5. Test coverage (>80% for new code)

**Medium Priority**:
6. CLAUDE.md workflow compliance
7. Conventional commit format
8. Documentation updates (docstrings, CLAUDE.md)

**Low Priority**:
9. Performance optimization
10. Code organization/structure

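As a sketch of items 1 and 4 above, prefer handling the error to checking first; the helper below is illustrative, not part of the codebase:

```python
from pathlib import Path

def read_source(path: Path) -> str | None:
    """TOCTOU-safe read: no exists()-then-open window, degrades gracefully."""
    try:
        return path.read_text(encoding="utf-8")
    except (FileNotFoundError, PermissionError, UnicodeDecodeError):
        return None
```
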
## Notes

- Use the `gh` CLI for all GitHub operations
- Always link to code with the full SHA (not `HEAD` or branch names)
- Link format: `https://github.com/owner/repo/blob/[full-sha]/path#L[start]-L[end]` (example below)
- Provide ≥1 line of context before/after the issue location
- Make a todo list first
- Calculate score impact using tier-based weighting from the agentready scoring algorithm

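A sketch of resolving the full SHA and building a permalink; the PR number, owner/repo, path, and line range are placeholders:

```bash
sha=$(gh pr view 123 --json headRefOid --jq .headRefOid)
echo "https://github.com/owner/repo/blob/${sha}/path/to/file.py#L10-L12"
```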
