Conversation

@XiaoBoAI (Collaborator) commented Jan 8, 2026

OpenJudge Version

0.2.0

Description

  • Replace training_grpo.md with comprehensive training_reward_models.md
  • Add LangSmith integration to mkdocs navigation
  • Update overview.md links to new training documentation
  • Refactor langfuse.md and langsmith.md integration docs
  • Minor fix in sft/README.md

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has been formatted with the pre-commit run --all-files command
  • All tests are passing
  • Docstrings are in Google style
  • Related documentation has been updated (e.g. links, examples, etc.)
  • Code is ready for review

@gemini-code-assist (Contributor)

Summary of Changes

Hello @XiaoBoAI, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly overhauls and enhances the documentation for OpenJudge, focusing on how users can train judge models and integrate with popular LLM observability platforms. The changes aim to provide a more structured, comprehensive, and user-friendly experience by consolidating training methodologies into a single guide and refining the integration tutorials for Langfuse and LangSmith. This ensures that users have clear, up-to-date resources for leveraging OpenJudge's capabilities.

Highlights

  • Consolidated Training Guide: Replaced the specific training_grpo.md with a new, comprehensive training_reward_models.md that covers SFT, Bradley-Terry, and GRPO methods for training judge models.
  • Documentation Navigation Update: The mkdocs.yml configuration has been updated to reflect the new training guide and to include a dedicated entry for LangSmith integration.
  • Integration Guide Refinements: Both langfuse.md and langsmith.md integration documents have been significantly refactored for improved clarity, consistency, and user experience, including updated structure, simplified explanations, and better code presentation.
  • Overview Page Link Updates: The overview.md document now correctly links to the new training_reward_models.md guide, ensuring users are directed to the most up-to-date training information.
  • Minor Markdown Fix: A small correction was made in cookbooks/training_judge_model/sft/README.md to update a mathematical notation for cross-entropy loss.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
|---------|---------|-------------|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request significantly improves the documentation by adding a comprehensive guide on training reward models and refactoring the Langfuse and LangSmith integration guides for better clarity and structure. The changes are well-organized and make the documentation more user-friendly. I've left a few suggestions to improve consistency in the new training guide and to fix a code example in the Langfuse integration documentation.

for i, result in enumerate(grader_results):
    trace_id = trace_id_mapping[i]
    print(f"Sending {grader_name} score for trace {trace_id}")
    send_result_to_langfuse(trace_id, grader_name, result)

high

The send_result_to_langfuse function is called here, but its definition appears later in the document. Python resolves names at call time, so running the snippets in document order would raise a NameError. To make the example runnable, please move the definition of send_result_to_langfuse above the batch_evaluate_traces function.
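
A minimal sketch of the suggested ordering, assuming batch_evaluate_traces wraps the loop quoted above and that grader_results, trace_id_mapping, and grader_name come from the surrounding tutorial; the placeholder body of send_result_to_langfuse stands in for the tutorial's real Langfuse call:

```python
# Define the helper first, so any later snippet that calls it resolves the name.
def send_result_to_langfuse(trace_id, grader_name, result):
    # Placeholder body: the tutorial's actual implementation forwards the grader
    # result to Langfuse as a score attached to the given trace.
    print(f"Would record {grader_name}={result} on trace {trace_id}")


# Defined (and executed) afterwards, the loop now finds the helper at call time.
def batch_evaluate_traces(grader_results, trace_id_mapping, grader_name):
    for i, result in enumerate(grader_results):
        trace_id = trace_id_mapping[i]
        print(f"Sending {grader_name} score for trace {trace_id}")
        send_result_to_langfuse(trace_id, grader_name, result)
```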


### Training Objective

$$\mathcal{L} = -\sum_{t} \log P(y_t | y_{<t}, x)$$

medium

For consistency with other LaTeX formulas in the repository (e.g., the one fixed in cookbooks/training_judge_model/sft/README.md in this same PR), it's better to use \mid for the conditional probability bar and \lt for the less-than symbol. This improves readability of the mathematical notation.

Suggested change
$$\mathcal{L} = -\sum_{t} \log P(y_t | y_{<t}, x)$$
$$\mathcal{L} = -\sum_{t} \log P(y_t \mid y_{\lt t}, x)$$

Comment on lines +73 to +80
| Parameter | Default | Description |
|-----------|---------|-------------|
| `MODEL_PATH` | `./models/Qwen3-14B` | Base model path |
| `TRAIN_BATCH_SIZE` | `96` | Global batch size |
| `MICRO_BATCH_SIZE` | `12` | Per-GPU micro batch |
| `MAX_LENGTH` | `4096` | Maximum sequence length |
| `SP_SIZE` | `8` | Sequence parallel size |
| `TOTAL_EPOCHS` | `1` | Training epochs |

medium

The MODEL_PATH default value ./models/Qwen3-14B uses a local-style path. This is inconsistent with the GRPO section, which uses a HuggingFace model ID (Qwen/Qwen3-8B). For clarity and consistency across the document, it would be better to use HuggingFace model IDs for all examples. A similar issue exists in the Bradley-Terry configuration table (line 150).
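
One way the row could read after this change, assuming the document keeps the 14B model and simply swaps in its HuggingFace ID (the same substitution would apply to the Bradley-Terry table):

Suggested change
| `MODEL_PATH` | `./models/Qwen3-14B` | Base model path |
| `MODEL_PATH` | `Qwen/Qwen3-14B` | Base model path |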

@ployts changed the title from "docs(building_graders): add training reward models guide and update i…" to "docs(building_graders): add training reward models guide and update integrations" on Jan 8, 2026
@helloml0326 merged commit c727399 into main on Jan 8, 2026
2 checks passed
@ployts deleted the docs/update-training-and-integrations branch on January 9, 2026 at 03:20