
Conversation


@Kipok Kipok commented Jan 31, 2026

Add the following fixes on top of #1152

  1. Add appropriate handling of the different baselines for hard prompts and writing prompts
  2. Add an appropriate judge prompt for writing that doesn't ask the judge to give its own response first
  3. Add category to metrics reporting and tests (a rough sketch of the resulting metrics layout is shown below)
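
For reference, here is a rough, hedged sketch of how the per-category breakdown nests next to the overall numbers. Only the "num_entries" field and the f"category_{category}" key pattern come from the arena_metrics.py changes in this PR; the dummy values and the surrounding dict are illustrative assumptions.

# Illustrative shape only: values are dummies, and the aggregate score fields
# produced by get_aggregate_score() are omitted for brevity.
overall_metrics = {
    "num_entries": 3,  # all judged prompts, regardless of category
    # per-category buckets are only added when more than one category is present
    "category_hard_prompt": {"num_entries": 2},
    "category_creative_writing": {"num_entries": 1},
}
print(overall_metrics)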

Summary by CodeRabbit

Release Notes

  • New Features

    • Added evaluation support for the arena-hard-v2 benchmark with per-category metrics tracking and aggregation
    • Implemented category-specific judging for improved evaluation accuracy across different task types including hard prompts and creative writing
  • Documentation

    • Added comprehensive setup and execution guides for arena-hard and arena-hard-v2 evaluations, including sample commands, environment configuration, and results examples


@Kipok Kipok requested a review from gwarmstrong January 31, 2026 02:20
@Kipok Kipok mentioned this pull request Jan 31, 2026

@greptile-apps greptile-apps bot left a comment


1 file reviewed, 1 comment


Comment on lines +109 to +110
# will fail if category not in category_prompts as this is unexpected
prompt = self.category_prompts[category]

will raise KeyError for hard_prompt category since category_prompts only contains creative_writing

Suggested change
-# will fail if category not in category_prompts as this is unexpected
-prompt = self.category_prompts[category]
+else:
+    # use category-specific prompt if available, otherwise fall back to default
+    prompt = self.category_prompts.get(category, self.prompt)

Signed-off-by: Igor Gitman <[email protected]>

@greptile-apps greptile-apps bot left a comment


3 files reviewed, 3 comments


Comment on lines 52 to +59
super().update(predictions)
self.scores.append([])
self.agg_mode = f"pass@{len(predictions)}"

# Track category for per-category scoring (defaults to None for v1 compatibility)
category = predictions[0].get("category")
self.categories.append(category)


ArenaMetrics.update() now unconditionally appends predictions[0].get("category") to self.categories, but get_metrics() only uses per-category aggregation when len(set(self.categories)) > 1. If a mixed run includes some rows without category (or category=None) alongside real categories, unique_categories becomes {None, "..."} and you’ll emit a category_None bucket in metrics, which is usually unintended and makes downstream consumers brittle.

This shows up when evaluating older/partially-prepared datasets or when a pipeline forgets to propagate category for some items.

A safer approach is to only create per-category buckets for non-None categories (or to require that all entries have a non-null category before enabling breakdown).
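
A minimal sketch of the stricter variant (only enable the breakdown when every entry carries a non-null category), written against the aggregation loop quoted later in this thread; self.categories, category_scores, get_aggregate_score, and the f"category_{category}" key come from the PR's diff, while the exact placement inside get_metrics() is an assumption:

# Sketch: skip the per-category breakdown entirely if any entry is missing a category.
unique_categories = set(self.categories)
if None not in unique_categories and len(unique_categories) > 1:
    for category, scores in category_scores.items():
        cat_metrics = {"num_entries": len(scores)}
        cat_metrics.update(get_aggregate_score(scores))
        overall_metrics[f"category_{category}"] = cat_metrics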

Comment on lines +71 to +74
def setup_prompt(self):
    if self.cfg.prompt_format == "openai":
        return None

setup_prompt() returns None for prompt_format == "openai", but fill_prompt() for openai delegates to super().fill_prompt(...) without any category-specific override. If arena-hard-v2 is run with prompt_format=openai, the creative_writing category will silently use the default judge prompt (and may reintroduce the “judge generates its own answer first” behavior this PR is trying to avoid).

If category-specific judging is required for v2, this likely needs an equivalent branch for the OpenAI prompt path (e.g., selecting different message templates/configs per category) or explicitly disallowing openai format for v2.
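
One possible shape for that branch, sketched as a free-standing helper rather than the actual judge class (only "judge/arena_creative" corresponds to the prompt config added by this PR; the default config name and the helper itself are hypothetical):

# Hedged sketch: choose a judge prompt config per category and fall back to a default.
DEFAULT_JUDGE_PROMPT_CONFIG = "judge/arena"  # placeholder name, not confirmed by the PR
CATEGORY_JUDGE_PROMPT_CONFIGS = {
    "creative_writing": "judge/arena_creative",  # prompt config added in this PR
}

def select_judge_prompt_config(category):
    """Return the judge prompt config for a category; unknown or missing categories use the default."""
    return CATEGORY_JUDGE_PROMPT_CONFIGS.get(category, DEFAULT_JUDGE_PROMPT_CONFIG)

The same lookup could drive which message templates are rendered on the openai path, or the openai format could simply be rejected for arena-hard-v2 until this is wired up.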

Comment on lines +64 to +71
# Create test.jsonl with category-specific baseline answers
with open(questions_file, "rt", encoding="utf-8") as fin, open(output_file, "wt", encoding="utf-8") as fout:
    for line in fin:
        data = json.loads(line)
        data["question"] = data.pop("prompt")
        category = data["category"]
        data["baseline_answer"] = baseline_answers[data["uid"]][category]
        fout.write(json.dumps(data) + "\n")

prepare.py assumes every question uid exists in baseline_answers and that it has an entry for the question’s category (baseline_answers[data["uid"]][category]). If the upstream dataset adds a new category, or if a baseline file is missing/partial, this will raise KeyError and stop dataset preparation.

Given v2 explicitly has multiple baselines by category, it would be safer to fail with a clearer error that prints the missing (uid, category) (or to handle unknown categories explicitly) so users can debug mismatched dataset/baseline versions.
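
A hedged sketch of that kind of up-front validation, collecting every mismatch before failing (the questions variable is a hypothetical list of the parsed question dicts; baseline_answers and the uid/category fields match the snippet above):

# Sketch: surface all missing (uid, category) pairs at once with a clear error.
missing = [
    (question["uid"], question["category"])
    for question in questions
    if question["category"] not in baseline_answers.get(question["uid"], {})
]
if missing:
    raise ValueError(f"Missing baseline answers for (uid, category) pairs: {missing}")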


coderabbitai bot commented Jan 31, 2026

📝 Walkthrough

This PR introduces arena-hard-v2 benchmark evaluation support, including new data preparation scripts, dataset configuration, per-category metrics tracking in arena metrics, creative writing prompt support in the arena judge, and comprehensive test coverage for the new functionality.

Changes

  • Documentation (docs/evaluation/other-benchmarks.md): Added evaluation setup details for arena-hard and arena-hard-v2, including default judge models, data preparation commands, execution examples with environment variables, sample results with nested category metrics, and vllm server configuration.
  • Arena-Hard-V2 Dataset (nemo_skills/dataset/arena-hard-v2/__init__.py, nemo_skills/dataset/arena-hard-v2/prepare.py): Introduced a new arena-hard-v2 module with default evaluation configuration (dataset_group, metrics_type, generation args, judge pipeline) and an automated data preparation script that downloads questions and category-specific baselines, extracts answers, and generates the enriched test dataset.
  • Arena-Hard Data Source (nemo_skills/dataset/arena-hard/prepare.py): Updated URL constants for questions and baseline data to point to the new repository location (lmarena instead of lm-sys).
  • Arena Metrics Enhancement (nemo_skills/evaluation/metrics/arena_metrics.py): Added per-category metrics tracking and aggregation; categories are now extracted from predictions, and metrics are computed separately for each category when multiple categories exist.
  • Arena Judge & Prompts (nemo_skills/inference/eval/arena_judge.py, nemo_skills/prompt/config/judge/arena_creative.yaml): Introduced category-aware prompt loading in the arena judge with a creative writing prompt override; added setup_prompt and fill_prompt methods to handle category-specific prompt selection and rendering.
  • Arena Metrics Tests (tests/test_arena_metrics.py): Added a test suite validating per-category scoring for arena-hard-v2, single-category handling for v1, data without categories, score parsing, and invalid score handling.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested labels

run GPU tests

Suggested reviewers

  • ekmb
  • titu1994
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 64.29%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'Add arena-hard v2' directly and clearly describes the main change: introducing arena-hard-v2 support to the codebase.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@nemo_skills/inference/eval/arena_judge.py`:
- Around lines 104-110: The code currently does a direct lookup,
self.category_prompts[category], which raises KeyError for categories like
"hard_prompt" that are present but unmapped. Change the selection to safely
fall back to the default prompt by using a membership check or dict.get, so
that prompt = self.category_prompts.get(category, self.prompt) (or, if you
prefer, check `if category in self.category_prompts` and assign accordingly),
keeping data_point.get("category"), self.category_prompts, and self.prompt as
the referenced symbols.
🧹 Nitpick comments (4)
nemo_skills/evaluation/metrics/arena_metrics.py (1)

103-105: Consider adding strict=True to the zip call for defensive programming.

Since self.scores and self.categories are populated together in update(), their lengths should always match. Adding strict=True would catch any future bugs where they diverge.

Proposed fix
-        for score, category in zip(self.scores, self.categories):
+        for score, category in zip(self.scores, self.categories, strict=True):
docs/evaluation/other-benchmarks.md (1)

45-49: Add language specifier to the fenced code block.

The code block at line 45 is missing a language specifier for the results table.

Proposed fix
-```
+```text
 ------------------------------------------- arena-hard -------------------------------------------
tests/test_arena_metrics.py (1)

31-154: Consider adding arena-hard-v2 to GPU and slurm test suites.

The new per-category scoring logic and category-specific baselines (hard_prompt and creative_writing) represent significant evaluation changes. Based on CONTRIBUTING.md guidance, new benchmarks with complicated evaluation/metrics logic should be added to slurm tests for comprehensive evaluation, and GPU tests should be run in CI.

nemo_skills/dataset/arena-hard-v2/prepare.py (1)

31-38: Returning empty string silently masks missing assistant messages.

If no assistant message is found, returning "" will silently propagate an empty baseline answer. Per coding guidelines, errors should surface rather than fail silently. Consider raising an error or removing the fallback entirely to let the StopIteration or KeyError surface naturally.

♻️ Proposed refactor to let errors surface
 def extract_answer_text(data):
     """Extract the answer text from the baseline model's response format."""
     messages = data["messages"]
     for msg in messages:
         if msg["role"] == "assistant":
             content = msg["content"]
             return content["answer"] if isinstance(content, dict) else content
-    return ""
+    raise ValueError(f"No assistant message found in baseline data: {data.get('uid', 'unknown')}")

As per coding guidelines: "Follow the Zen of Python principles: prioritize... letting errors surface rather than silently failing".

Comment on lines +104 to +110
# Select the appropriate prompt based on category. If not defined, forcing fall-back to default prompt
category = data_point.get("category")
if not category:
    prompt = self.prompt
else:
    # will fail if category not in category_prompts as this is unexpected
    prompt = self.category_prompts[category]

⚠️ Potential issue | 🔴 Critical

Bug: hard_prompt category will raise KeyError instead of falling back to default prompt.

The current logic falls back to default prompt only when category is falsy (None/empty). However, for categories like hard_prompt that exist but aren't in category_prompts, line 110 will raise a KeyError.

Based on the comment at line 75-76 ("default prompt used for most categories including hard_prompt"), the intent is to fall back to default for unmapped categories.

Proposed fix
         # Select the appropriate prompt based on category. If not defined, forcing fall-back to default prompt
         category = data_point.get("category")
-        if not category:
-            prompt = self.prompt
-        else:
-            # will fail if category not in category_prompts as this is unexpected
-            prompt = self.category_prompts[category]
+        if category and category in self.category_prompts:
+            prompt = self.category_prompts[category]
+        else:
+            prompt = self.prompt
🤖 Prompt for AI Agents
In `@nemo_skills/inference/eval/arena_judge.py`, around lines 104-110: the code
currently does a direct lookup, self.category_prompts[category], which raises
KeyError for categories like "hard_prompt" that are present but unmapped. Change
the selection to safely fall back to the default prompt by using a membership
check or dict.get, so that prompt = self.category_prompts.get(category,
self.prompt) (or, if you prefer, check `if category in self.category_prompts`
and assign accordingly), keeping data_point.get("category"),
self.category_prompts, and self.prompt as the referenced symbols.


@greptile-apps greptile-apps bot left a comment


3 files reviewed, 3 comments


data = json.loads(line)
data["question"] = data.pop("prompt")
category = data["category"]
data["baseline_answer"] = baseline_answers[data["uid"]][category]

assumes every uid has a baseline for its category, will raise KeyError if baseline data is missing/partial

Suggested change
-data["baseline_answer"] = baseline_answers[data["uid"]][category]
+if data["uid"] not in baseline_answers or category not in baseline_answers[data["uid"]]:
+    raise ValueError(f"Missing baseline answer for uid={data['uid']}, category={category}")
+data["baseline_answer"] = baseline_answers[data["uid"]][category]

    prompt = self.prompt
else:
    # will fail if category not in category_prompts as this is unexpected
    prompt = self.category_prompts[category]

will raise KeyError if category isn't in category_prompts (e.g., unknown category from dataset)

Per CONTRIBUTING.md guidelines: "Don't be overly defensive" - let it fail with a clear error. However, the error message should indicate which category is missing.

Suggested change
-prompt = self.category_prompts[category]
+# will fail if category not in category_prompts as this is unexpected
+if category not in self.category_prompts:
+    raise KeyError(f"Category '{category}' not found in category_prompts. Available: {list(self.category_prompts.keys())}")
+prompt = self.category_prompts[category]

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Comment on lines +109 to +113
if len(unique_categories) > 1:
    for category, scores in category_scores.items():
        cat_metrics = {"num_entries": len(scores)}
        cat_metrics.update(get_aggregate_score(scores))
        overall_metrics[f"category_{category}"] = cat_metrics

when mixed data includes category=None, creates category_None bucket in metrics which is brittle for downstream consumers

If arena-hard-v1 data (no category) is mixed with v2 data (with categories), unique_categories becomes {None, "hard_prompt", "creative_writing"} and you'll emit category_None in the output.

Consider only creating per-category buckets for non-None categories:

Suggested change
-if len(unique_categories) > 1:
-    for category, scores in category_scores.items():
-        cat_metrics = {"num_entries": len(scores)}
-        cat_metrics.update(get_aggregate_score(scores))
-        overall_metrics[f"category_{category}"] = cat_metrics
+# If we have multiple categories, compute per-category metrics
+unique_categories = set(self.categories)
+if len(unique_categories) > 1:
+    for category, scores in category_scores.items():
+        if category is not None:  # Skip None category to avoid brittle category_None buckets
+            cat_metrics = {"num_entries": len(scores)}
+            cat_metrics.update(get_aggregate_score(scores))
+            overall_metrics[f"category_{category}"] = cat_metrics
