feat(paper_review): add rebuttal generation and assessment support #155
Merged
Conversation
Add two new capabilities to the paper review pipeline:

- Rebuttal Generation: drafts point-by-point rebuttals with [TODO] placeholders for items requiring new experiments, proofs, or data the author must provide
- Rebuttal Assessment: evaluates whether an author's rebuttal adequately addresses reviewer concerns and provides an updated recommendation score (1-6)

New files:

- graders/rebuttal_generation.py, graders/rebuttal_assessment.py
- prompts/rebuttal_generation.py, prompts/rebuttal_assessment.py
- examples/rebuttal_workflow.py

Modified files:

- schema.py: RebuttalResult, RebuttalAssessmentResult, new ReviewStage values
- pipeline.py: generate_rebuttal(), assess_rebuttal(), review_and_report() extension
- report.py: rebuttal draft and assessment report sections
- __init__.py files: export new classes

Made-with: Cursor
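As a rough sketch of what the rebuttal-generation stage described above might produce, the snippet below models a minimal RebuttalResult and scans a drafted rebuttal for the [TODO] placeholders that mark items the author must still supply. The field names and the placeholder format are illustrative assumptions, not the actual schema.py definitions from this PR.

```python
import re
from dataclasses import dataclass, field

# Hypothetical shape mirroring the RebuttalResult added in schema.py;
# field names here are assumptions for illustration only.
@dataclass
class RebuttalResult:
    point_responses: list[str]                  # one drafted response per reviewer concern
    todo_items: list[str] = field(default_factory=list)  # placeholders the author must fill

TODO_RE = re.compile(r"\[TODO[^\]]*\]")

def extract_todos(draft: str) -> list[str]:
    """Collect [TODO] placeholders marking missing experiments, proofs, or data."""
    return TODO_RE.findall(draft)

draft_responses = [
    "R1.1: We clarify the ablation setup in Sec 4.",
    "R1.2: [TODO: add runtime comparison on the held-out dataset]",
    "R2.1: [TODO: provide proof of Lemma 2]",
]
result = RebuttalResult(
    point_responses=draft_responses,
    todo_items=extract_todos(" ".join(draft_responses)),
)
print(len(result.todo_items))  # 2 placeholders left for the author to resolve
```

The point is only the workflow shape: the generator answers every reviewer point immediately and defers author-only items behind explicit markers, which downstream tooling can count or surface.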
Force-pushed from 353cc40 to af941b2
…structions support

Bring rebuttal prompts to the same professional academic level as review prompts:

- Deep integration with DisciplineConfig: evaluation_dimensions, correctness_categories, reviewer_context, scoring_notes all flow into rebuttal generation and assessment
- Venue-aware: rebuttal generation adapts to venue conventions; assessment applies venue-specific acceptance bar and contribution standards
- Author instructions support for rebuttal generation (same as review prompt)
- Professional AC identity with high-standards calibration for assessment
- Structured assessment framework: Relevance, Evidence Strength, Completeness, Verifiability, Honesty — mirroring the rigor of the review evaluation dimensions
- Explicit score update rules: increase/decrease/maintain with concrete criteria
- Graders updated to pass venue and instructions parameters through to prompts

Made-with: Cursor
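The "explicit score update rules" described above could be sketched as a small function: judge the rebuttal along the five assessment dimensions, then increase, decrease, or maintain the recommendation and clamp it to the 1-6 scale. The dimension names follow the framework listed in this commit; the thresholds and the aggregation are purely assumed here, not the PR's actual criteria.

```python
# Hypothetical sketch of increase/decrease/maintain score update rules.
# Thresholds (0.8 / 0.4) and averaging are assumptions for illustration.

def update_score(original: int, dims: dict[str, float]) -> int:
    """Return an updated recommendation on the 1-6 scale.

    dims maps each assessment dimension (relevance, evidence strength,
    completeness, verifiability, honesty) to a 0-1 judgment score.
    """
    avg = sum(dims.values()) / len(dims)
    if avg >= 0.8:        # concerns convincingly addressed -> increase
        updated = original + 1
    elif avg < 0.4:       # rebuttal weak or evasive -> decrease
        updated = original - 1
    else:                 # partially addressed -> maintain
        updated = original
    return max(1, min(6, updated))  # clamp to the valid score range

dims = {"relevance": 0.9, "evidence_strength": 0.8, "completeness": 0.85,
        "verifiability": 0.8, "honesty": 1.0}
print(update_score(3, dims))  # 4
```

Clamping matters at the edges: a strong rebuttal to a paper already scored 6, or a weak one at 1, leaves the score unchanged rather than stepping outside the scale.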
Force-pushed from 8051071 to e718468
OpenJudge Version

[The version of OpenJudge you are working on, e.g. import openjudge; print(openjudge.__version__)]

Description

[Please describe the background, purpose, changes made, and how to test this PR]

Checklist

Please check the following items before code is ready to be reviewed.

- pre-commit run --all-files command