Commit 5e71993

chore: update review commands for clarity and structure
- Renamed the review command files to include a personal-todo identifier for better context.
- Streamlined the execution steps in `review-implement.md` and `review-validate.md`, clarifying how findings are implemented and how specs are validated.
- Improved the output format in `review.md` for consistent, readable findings and validation results.

These updates aim to make the review commands easier to use and the documentation clearer for developers.
1 parent 6bef486 commit 5e71993

File tree: 3 files changed (+150, -620 lines)

.cursor/commands/review-implement.md

Lines changed: 47 additions & 161 deletions
@@ -1,4 +1,4 @@
-# Review Implement Command
+# Review Implement Command (personal-todo)
 
 Read a validated review report and implement actionable findings, prioritizing Critical and High severity items with clear implementation paths.
 
@@ -14,178 +14,64 @@ $ARGUMENTS
 
 After a review has been validated via `/review-validate`, implement the remaining actionable findings from the review report. Focus on Critical and High priority items that have clear implementation paths, and update the review file so it reflects the current state using the same format as `.cursor/commands/review.md`.
 
-**Note**: This command can be used alongside manual fixes. The author may choose to:
-- Fix issues manually based on the review file
-- Use this command to automate implementation
-- Use both approaches (manual fixes + automated implementation)
+**Note**: This command can be used alongside manual fixes. The author may choose to fix issues manually or use this command to automate implementation.
 
 ## Execution Steps
 
 1. **Select Target Review File**
-- If the user supplies a path (e.g., `--review reviews/REVIEW_branch.md`), use it.
-- Otherwise, pick the most recent `reviews/REVIEW_*.md` file.
-- **Hard error**: If no review file found, fail with clear error message.
-
-2. **Parse Review File**
-- Read the review file and extract:
-- Review Summary (baseline, decision, risk)
-- Findings grouped by severity (Critical, High, Medium, Low)
-- Each finding's ID, file:line, description, and change recommendation
-- Metrics Summary (to track progress)
-- Skip findings marked as "None." or empty sections.
-- **Note**: Only implement findings that have clear "Change:" recommendations with actionable code paths.
-
-3. **Filter Actionable Findings**
-- **Include**:
-- Critical and High severity findings with `file:line` references
-- Findings with explicit "Change:" recommendations that include code examples or clear implementation steps
-- Findings that don't require external clarification (no "NEEDS CLARIFICATION" markers)
-- **Re-validate**: If a finding is no longer applicable, remove it from the review file and update the summary/metrics.
-- **Exclude**:
-- Findings marked as "Out of scope" or "Deferred"
-- Findings requiring user input or clarification
-- Findings without clear implementation guidance
-- Low priority findings (unless user explicitly requests via `--include-low`)
-- **User override**: If `$ARGUMENTS` contains specific finding IDs (e.g., `--findings Spec-1,Quality-2`), implement only those.
-
-4. **Prioritize Implementation Order**
-- **Phase 1**: Critical findings (security, correctness, spec compliance)
-- **Phase 2**: High findings (maintainability, performance, quality)
-- **Phase 3**: Medium findings (if `--include-medium` flag provided)
-- **Within each phase**: Order by file dependencies (read files before modifying them)
-
-5. **Implement Each Finding**
-- For each actionable finding:
-- **Read target file**: Load the file referenced in `file:line`
-- **Understand context**: Read surrounding code (at least 10 lines before/after the target line)
-- **Apply change**: Implement the recommendation from the "Change:" section
-- **Validate change**: Run any validation commands specified in the finding's "Validate:" clause
-- **Update review**: Remove the resolved finding from the report and refresh summary/metrics to reflect the new state
-- **Error handling**: If a finding cannot be implemented (file missing, unclear recommendation, validation fails):
-- Log error with finding ID
-- Add note under **Questions for Author** describing why it could not be implemented
-- Continue with next finding
-
-6. **Update Review File**
-- After implementing findings, update the review file in place:
-- Keep the same section order and formatting as `.cursor/commands/review.md`
-- Remove resolved findings and update **Key Recommendations** to match remaining issues
-- Update **Review Summary** and **Metrics Summary** to reflect the current state
-- Use **Questions for Author** for blocked items or missing clarification
-- If no findings remain, set Decision to approve and Key Recommendations to “None”
-- Do not add extra sections beyond the review template
-
-7. **Run Validation**
-- After all implementations:
-- Run spec validation: `uv run python scripts/check_docs.py` (if spec files were modified)
-- Run linting: `uv run ruff check` on modified files (if applicable)
-- Run tests: Execute any test files mentioned in findings' "Validate:" clauses (e.g., `uv run pytest path/to/test.py`)
-- **Report failures**: If validation fails, add notes to review file but don't rollback changes
-
-8. **Generate Summary**
-- Output summary of:
-- Number of findings implemented successfully
-- Number of findings skipped (with reasons)
-- Number of findings that failed implementation
-- Files modified
-- Next steps (if any findings remain)
-
-## Implementation Rules
-
-- **Preserve existing code**: Only modify code as specified in the finding's "Change:" recommendation
-- **Follow project conventions**: Use existing code style, import patterns, and naming conventions
-- **Add tests**: If a finding requires test coverage, implement tests alongside the fix
-- **Atomic commits**: Consider grouping related findings by file, but don't force atomicity if findings are independent
-- **Documentation**: Update docstrings/comments if the change affects public APIs
-- **No over-engineering**: Implement exactly what the finding recommends, no more
-
-## Finding Format Parsing
-
-Parse findings using the same format as `/review`:
-
-```
-[ID][Severity][Tag] file:line – description
-
-**Current Implementation:** (code snippet)
-
-**Problem:** explanation
-
-**Change:** recommendation with code example
-
-**Validate:** test file or command
-```
-
-Extract:
-- Finding ID (e.g., `Spec-1`, `Quality-2`)
-- Severity (Critical, High, Medium, Low)
-- File path and line number
-- Change recommendation (code example or clear steps)
-- Validation command (if provided)
+- If the user supplies a path, use it.
+- Otherwise, pick the most recent `reviews/REVIEW_*.md` file. **Hard error**: If no review file found, fail with clear error message.
+
+2. **Parse & Filter Findings**
+- Read the review file and extract: Review Summary, Metrics, and Findings grouped by severity.
+- **Include**: Critical and High severity findings with `file:line` references and explicit "Change:" recommendations.
+- **Exclude**: Findings marked as "Out of scope", "Deferred", or requiring external clarification ("NEEDS CLARIFICATION").
+- **User Override**: If `$ARGUMENTS` contains specific finding IDs (e.g., `--findings Spec-1`), implement only those.
+
+3. **Prioritize & Implement**
+- **Order**: Phase 1 (Critical: security, correctness), Phase 2 (High: maintainability, performance), Phase 3 (Medium: if requested).
+- **Execution**: For each actionable finding:
+- Read target file + context (10 lines before/after).
+- Apply change exactly as recommended (No over-engineering).
+- Follow project conventions (FastAPI, SQLAlchemy, Pydantic).
+- Validate using finding's "Validate:" clause.
+- Update review file immediately to track progress.
+
+4. **Run Global Validation**
+- After implementations:
+- Run spec validation: `uv run python scripts/check_docs.py` (if specs touched).
+- Run linting: `uv run ruff check` on modified files.
+- Run tests: Execute relevant `pytest` commands.
+
+5. **Generate Summary**
+- Output summary of implemented findings, skipped findings, and modified files.
+
+## Implementation Directives
+
+**Directive I: Standard Adherence**
+- **Follow Conventions**: Use existing code style, absolute imports, and naming conventions.
+- **Preserve Behavior**: Only modify code explicitly specified in the "Change:" recommendation.
+- **No Over-Engineering**: Implement exactly what is recommended, no more.
+
+**Directive II: Incremental Verification**
+- **Atomic Progress**: Remove resolved findings immediately after successful implementation.
+- **Verification**: Run all validation commands mentioned in the finding's "Validate:" clause.
+- **Documentation**: Update docstrings/comments if the change affects public APIs.
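The global-validation step boils down to running a list of shell commands and recording failures without rolling anything back; a minimal sketch, with the command list taken from that step (treat it as illustrative):

```python
import subprocess

# Commands from the "Run Global Validation" step (illustrative; adjust per repo).
VALIDATION_COMMANDS = [
    ["uv", "run", "python", "scripts/check_docs.py"],
    ["uv", "run", "ruff", "check"],
]

def run_validations(commands: list[list[str]]) -> list[str]:
    """Run each validation command and collect the ones that failed.
    Failures are reported, not rolled back, matching the step's rule."""
    failures = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(" ".join(cmd))
    return failures
```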
 
 ## Output Format
 
 After implementation, display:
-
-```
+```markdown
 ## Implementation Summary
-
-✅ Implemented: N findings
-- [Spec-1] file.py:42 - Description
-- [Quality-2] file.py:100 - Description
-
-⏭️ Skipped: M findings
-- [Spec-3] - Requires clarification
-- [Quality-4] - Out of scope
-
-❌ Failed: K findings
-- [Spec-5] file.py:200 - Error: file not found
-
-📝 Files Modified:
-- file1.py
-- file2.py
-
-🔍 Validation:
-- Spec validation: ✅ Passed
-- Linting: ✅ Passed
-- Tests: ⚠️ 2 failures (see review file notes)
-```
-
-## Error Handling
-
-- **File not found**: Log error, skip finding, add note to review file
-- **Unclear recommendation**: Log warning, skip finding, add note requesting clarification
-- **Validation failure**: Log error, add note to review file, continue with other findings
-- **Syntax error**: Rollback the specific change, log error, add note to review file
-
-## Behavior Rules
-
-- **Read-only review updates**: Only update the review file to track progress; don't modify other files unless implementing a finding
-- **Respect user intent**: If user specifies specific findings via `$ARGUMENTS`, only implement those
-- **Preserve review structure**: Keep the original review format and structure from `.cursor/commands/review.md`
-- **Incremental progress**: Remove resolved findings immediately after successful implementation
-- **Clear communication**: Provide clear error messages explaining why a finding couldn't be implemented
-
-## Example Usage
-
-```bash
-# Implement all actionable findings from most recent review
-/review-implement
-
-# Implement findings from specific review file
-/review-implement --review reviews/REVIEW_c-spec-overhaul.md
-
-# Implement only specific findings
-/review-implement --findings Spec-1,Quality-2,Security-3
-
-# Include medium priority findings
-/review-implement --include-medium
-
-# Include low priority findings
-/review-implement --include-low
+✅ Implemented: N findings ([Spec-1], [Quality-2])
+⏭️ Skipped: M findings (Reason)
+❌ Failed: K findings (Error: message)
+📝 Files Modified: file1.py, file2.py
+🔍 Validation: Spec [Passed], Linting [Passed], Tests [Failed (Notes)]
 ```
 
 ## Integration with Review Workflow
 
-This command is used after `/post-review` when author chooses to automate fixes. It can be used alongside manual fixes.
+This command is used after `/post-review` when author chooses to automate fixes.
 
 Context: $ARGUMENTS
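Several steps in `review-implement.md` call for refreshing the Metrics Summary as findings are resolved; counting the remaining findings by severity is the core of that bookkeeping. A minimal sketch, assuming a simple list-of-dicts finding shape (not the command's actual internals):

```python
from collections import Counter

def metrics_summary(findings: list[dict]) -> dict[str, int]:
    """Recount remaining findings by severity for the Metrics Summary."""
    counts = Counter(f["severity"] for f in findings)
    return {sev: counts.get(sev, 0) for sev in ("Critical", "High", "Medium", "Low")}
```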

.cursor/commands/review-validate.md

Lines changed: 13 additions & 19 deletions
@@ -1,4 +1,4 @@
-# Review Validate Command
+# Review Validate Command (personal-todo)
 
 Validate current specs after a review and refresh the existing review report to match the current state.
 
@@ -20,39 +20,33 @@ After a review report exists in `reviews/`, re-check the current state, validate
 - If the user supplies a path, use it.
 - Otherwise, pick the most recent `reviews/REVIEW_*.md`.
 2. **Capture Baseline**
+- **Ensure `main` is up-to-date**: `git fetch origin main:main` (if exists) or `git fetch origin main`.
 - Record `git branch --show-current`, `git rev-parse main`, `git merge-base main HEAD`, and `git diff --stat main...HEAD`.
-- If `main` might be stale due to network restrictions, note that in the Review Summary.
-3. **Validate Spec Structure**
-- Run `uv run python scripts/check_docs.py`.
+3. **Validate State**
+- Run `uv run python scripts/check_docs.py` for spec integrity.
 - If validation fails, capture errors as new findings.
 4. **Re-evaluate Findings**
 - Compare the current repo state to the review file contents.
 - Remove or downgrade findings that no longer apply.
 - Add new findings based on the current diff/state, using deterministic IDs that continue existing sequences.
 5. **Update Review Report**
 - Keep the same template and formatting as `.cursor/commands/review.md`.
-- Update **Review Summary**, **Decision**, **Key Recommendations**, **Findings**, **Tests**, and **Metrics Summary**.
+- Update **Executive Summary**, **Findings**, **Strengths**, **Testing Results**, and **Metrics Summary**.
 - If no findings remain, set Decision to approve and Key Recommendations to “None”.
 - Keep deterministic IDs (`[Spec-1]`, `[Quality-2]`, etc.).
 6. **Write Changes In-Place**
 - Update the existing `reviews/REVIEW_*.md` file only; do not create a new review report.
 
-## Output Template
+## Output Directives (CRITICAL)
 
-Use the same section order and formatting defined in `.cursor/commands/review.md`:
+**Directive I: Report Integrity**
+- **Clean Output**: Never include "[N tools called]" lines or internal implementation details.
+- **Actual Commands**: The Testing Results section must show the actual commands run (e.g., `uv run ruff check .`) and their output.
+- **Explicit Skips**: List any skipped validations with reasons (e.g., "⏭️ Skipped: [command] - Reason: [reason]").
 
-- **Review Summary** (Decision/Risk/Baseline)
-- **Key Recommendations (Top Priority)**
-- **Findings (ordered by severity)**
-- **Strengths**
-- **Tests**
-- **Questions for Author**
-- **Metrics Summary**
-
-## Notes
-
-- Do not reference any tool names in the report; keep the output consistent with existing review files in `reviews/`.
-- Do not add extra sections beyond the review template; keep the report layout stable.
+**Directive II: Structural Stability**
+- **Template Compliance**: Do not add extra sections beyond the review template; keep the report layout stable.
+- **No Tool Names**: Do not reference any tool names in the report; keep the output consistent with existing review files.
 
 ## Integration with Review Workflow
 
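The re-evaluation step in `review-validate.md` requires new findings to carry deterministic IDs that continue existing sequences. One way to sketch that (a hypothetical helper, assuming the `[Spec-1]`-style IDs used throughout these commands):

```python
import re

def next_finding_id(review_text: str, category: str) -> str:
    """Continue an existing ID sequence (e.g. [Spec-1], [Spec-2] -> Spec-3)
    so that new findings keep deterministic, stable IDs."""
    numbers = [
        int(n) for n in re.findall(rf"\[{re.escape(category)}-(\d+)\]", review_text)
    ]
    return f"{category}-{max(numbers, default=0) + 1}"
```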