* docs: rename reward model to judge model for consistency
* docs: add terminology note clarifying judge model vs reward model
* docs: fix terminology in build_reward.md - use composite rewards instead of judge models
docs/building_graders/overview.md (6 additions, 6 deletions)

@@ -1,6 +1,6 @@
# Building Custom Graders
- Extend OpenJudge beyond built-in evaluators by creating custom graders or training reward models. Build domain-specific evaluation logic that seamlessly integrates with OpenJudge's evaluation pipeline.
+ Extend OpenJudge beyond built-in evaluators by creating custom graders or training judge models. Build domain-specific evaluation logic that seamlessly integrates with OpenJudge's evaluation pipeline.
## Why Build Custom Graders?
@@ -17,7 +17,7 @@ OpenJudge supports three paths for creating custom graders, each optimized for d
|**Generate from Data**| 1-4 hours | 50-500 examples | Iterative refinement, transparent rubrics | Medium setup + pay-per-query |
- |**Train Reward Models**| 1-3 days | 1K-100K pairs | High-volume production (>1M queries/month) | High upfront, 10x lower per-query |
+ |**Train Judge Models**| 1-3 days | 1K-100K pairs | High-volume production (>1M queries/month) | High upfront, 10x lower per-query |
Use this decision tree to choose the right approach based on your data availability and requirements:
@@ -57,7 +57,7 @@ Use this decision tree to choose the right approach based on your data availabil
**Choose based on your situation:**
- - **Have labeled data + need automation?** → Train a reward model
+ - **Have labeled data + need automation?** → Train a judge model
- **Have data + need fast iteration?** → Generate rubrics from data
- **No data + need immediate results?** → Create custom graders
@@ -75,19 +75,19 @@ Automatically generate evaluation rubrics and create graders. Two approaches ava
**Learn more:** [Generate Rubrics as Graders →](generate_rubrics_as_graders.md)
- ### Approach 3: Train Reward Models
+ ### Approach 3: Train Judge Models
Train neural networks on preference data to learn evaluation criteria automatically. Supports Bradley-Terry (preference pairs), Generative Pointwise (absolute scores), and Generative Pairwise (comparison decisions). Requires 1K-100K examples and 1-3 days but delivers highly consistent evaluation at 10x lower per-query cost—ideal for high-volume scenarios exceeding 1M queries per month.
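To make the Bradley-Terry option concrete, here is a minimal sketch of the pairwise preference loss it refers to, written in PyTorch. The tiny linear scorer, the feature dimension, and the tensor names are placeholders for illustration; a real judge model would put a scoring head on a language-model backbone, and none of this is OpenJudge code.

```python
# Minimal sketch of the Bradley-Terry preference loss (illustrative only,
# not OpenJudge's implementation). A real judge model would replace the
# linear scorer with a scoring head on top of a transformer backbone.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder scorer: maps a (prompt, response) feature vector to a scalar score.
scorer = torch.nn.Linear(16, 1)

# Fake batch of preference pairs: chosen vs. rejected response features.
chosen_feats = torch.randn(8, 16)
rejected_feats = torch.randn(8, 16)

r_chosen = scorer(chosen_feats).squeeze(-1)      # score of the preferred response
r_rejected = scorer(rejected_feats).squeeze(-1)  # score of the dispreferred response

# Bradley-Terry: maximize P(chosen > rejected) = sigmoid(r_chosen - r_rejected),
# i.e. minimize the negative log-sigmoid of the score margin.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
print(f"pairwise preference loss: {loss.item():.4f}")
```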
docs/building_graders/training_judge_models.md (3 additions, 0 deletions)

@@ -2,6 +2,9 @@
Train judge models using three approaches: **SFT** for foundation learning, **Bradley-Terry** for scalar preference scoring, and **GRPO** for generative evaluation with reasoning.
+ !!! info "Terminology: Judge Model vs Reward Model"
+ In OpenJudge, we use **judge model** to refer to models trained for evaluation. This is the same concept as **reward model** commonly used in RLHF literature. Both terms describe models that assess and score AI outputs—we prefer "judge model" to emphasize the evaluation and assessment role.
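To contrast the generative approaches with the scalar Bradley-Terry scorer, the sketch below prompts an LLM judge to compare two responses and end with a verdict tag. The `generate` callable, the prompt wording, and the `[[A]]`/`[[B]]` format are assumptions invented for this example, not OpenJudge prompts or APIs.

```python
# Illustrative sketch of generative pairwise judging: the judge writes
# reasoning and ends with a verdict tag. The `generate` callable and the
# verdict format are assumptions for this example, not OpenJudge's API.
from typing import Callable

JUDGE_TEMPLATE = """You are an impartial judge. Compare the two responses to the question.
Question: {question}
Response A: {response_a}
Response B: {response_b}
Explain your reasoning, then end with exactly one verdict tag: [[A]] or [[B]]."""

def pairwise_judge(question: str, response_a: str, response_b: str,
                   generate: Callable[[str], str]) -> str:
    """Return 'A' or 'B' by parsing the judge's final verdict tag."""
    prompt = JUDGE_TEMPLATE.format(
        question=question, response_a=response_a, response_b=response_b
    )
    output = generate(prompt)
    a_pos, b_pos = output.rfind("[[A]]"), output.rfind("[[B]]")
    if a_pos == -1 and b_pos == -1:
        raise ValueError("judge produced no verdict tag")
    return "A" if a_pos > b_pos else "B"

# Stub generator so the sketch runs without a real model.
fake_llm = lambda prompt: "Response B is more complete and accurate. Verdict: [[B]]"
print(pairwise_judge("What is 2+2?", "4", "4, because 2+2=4.", fake_llm))  # -> B
```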
docs/get_started/build_reward.md (1 addition, 1 deletion)

@@ -259,7 +259,7 @@ asyncio.run(main())
Running this code evaluates both responses across three quality dimensions and produces a training reward for each. These rewards can then feed into RLHF or DPO algorithms to optimize your chatbot. The output shows individual dimension scores alongside the final aggregated reward, helping you understand what drives the training signal.
- You now have a foundation for building reward models. Start with a single grader to validate your setup, then progressively add more dimensions as needed. The key is choosing graders that align with your application's requirements and weighting them appropriately based on what matters most for your use case.
+ You now have a foundation for building composite rewards. Start with a single grader to validate your setup, then progressively add more dimensions as needed. The key is choosing graders that align with your application's requirements and weighting them appropriately based on what matters most for your use case.
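As a rough illustration of the weighting advice in the line above, the snippet below combines per-dimension grader scores into one scalar reward. The dimension names and weights are invented for this example and do not correspond to specific OpenJudge graders.

```python
# Sketch of a composite reward: weighted aggregation of per-dimension grader
# scores into one training signal. Dimension names and weights are illustrative.

def composite_reward(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores (each in [0, 1]) into a weighted average."""
    total_weight = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total_weight

# Example: three hypothetical quality dimensions for a chatbot response.
weights = {"helpfulness": 0.5, "relevance": 0.3, "safety": 0.2}
response_scores = {"helpfulness": 0.8, "relevance": 0.9, "safety": 1.0}

print(f"training reward: {composite_reward(response_scores, weights):.3f}")  # 0.870
```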
docs/get_started/core_concepts.md (3 additions, 0 deletions)

@@ -10,6 +10,9 @@ In the era of advanced AI systems, especially large language models (LLMs), havi
**Reward** mechanisms, on the other hand, provide signals that guide model training through techniques like Reinforcement Learning from Human Feedback (RLHF). These reward signals enable automated optimization, allowing systems to self-improve without constant human intervention by providing feedback on the quality of model outputs.
+ !!! info "Terminology: Judge Model vs Reward Model"
+ In OpenJudge, we use **judge model** to refer to models trained for evaluation. This is the same concept as **reward model** commonly used in RLHF literature. Both terms describe models that assess and score AI outputs—we prefer "judge model" to emphasize the evaluation and assessment role.
The OpenJudge framework unifies these two critical functions under a single abstraction: the Grader. A Grader is a modular, standardized component that can function as either an evaluator or a reward generator depending on your use case. As an **evaluator**, a Grader assesses model outputs against specific criteria. As a **reward generator**, a Grader provides signals that guide model training. This unified approach provides a consistent interface that simplifies the process of building, managing, and deploying both evaluation and reward systems, transforming raw model outputs into meaningful, quantifiable assessments that serve as the foundation for systematic model evaluation and automated model improvement.
- |**Trajectory**| Multi-step reasoning paths and efficiency | Cost optimization, training reward models |
+ |**Trajectory**| Multi-step reasoning paths and efficiency | Cost optimization, training judge models |
!!! tip "Evaluation Strategy"
Start with **Final Response** evaluation to establish baseline success rates. When failures occur, use **Single Step** evaluation to pinpoint root causes. Use **Trajectory** evaluation to detect systemic issues like loops or inefficiencies.
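To tie the terminology together, here is a toy sketch of the dual evaluator / reward-generator role that core_concepts.md describes for a Grader. The class and method names are hypothetical stand-ins rather than OpenJudge's actual interface; the point is only that one score can be read as an evaluation result or reused as a training reward.

```python
# Toy sketch of the dual role described above: one grading component used both
# as an evaluator and as a reward signal. Names are hypothetical, not the
# OpenJudge interface.
from dataclasses import dataclass

@dataclass
class GradeResult:
    score: float      # normalized to [0, 1]
    rationale: str

class LengthGrader:
    """Trivial example criterion: penalize overly long answers."""

    def __init__(self, max_chars: int = 500):
        self.max_chars = max_chars

    def grade(self, response: str) -> GradeResult:
        score = min(1.0, self.max_chars / max(len(response), 1))
        return GradeResult(score, f"{len(response)} chars vs budget {self.max_chars}")

grader = LengthGrader()
result = grader.grade("A short, focused answer.")

# As an evaluator: report the score against a pass/fail threshold.
print("evaluation:", "pass" if result.score >= 0.8 else "fail", "-", result.rationale)

# As a reward generator: feed the same score into an RLHF/DPO training loop.
reward = result.score
print("training reward:", reward)
```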