
chore: Fix docker file for LCB (#98)

Closed
arekay-nv wants to merge 3 commits into main from arekay/cleanup_report_output

Conversation

@arekay-nv
Collaborator

@arekay-nv arekay-nv commented Jan 15, 2026

Dumps the metrics summary to a file instead of to console.

With this PR, gpt-oss-120b runs end to end via the container on both the performance and accuracy datasets. The only patch still needed is the following fix to the LCB evaluator:

diff --git a/lcb_runner/evaluation/compute_code_generation_metrics.py b/lcb_runner/evaluation/compute_code_generation_metrics.py
index b8de33e..e55efd9 100644
--- a/lcb_runner/evaluation/compute_code_generation_metrics.py
+++ b/lcb_runner/evaluation/compute_code_generation_metrics.py
@@ -206,7 +206,7 @@ def codegen_metrics(
             final_metadata[i] = [json.dumps(x) for x in final_metadata[i]]
 
         assert len(final_metadata[i]) == len(
-            generations_list[0]
+            generations_list[i]
         ), f"{len(final_metadata[i])=}"
 
     return [metrics, results, final_metadata]
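
The corrected assertion compares each problem's metadata against its own generations rather than the first problem's. A tiny standalone sketch (hypothetical data, not taken from the LCB codebase) of why indexing by `i` matters when problems have different generation counts:

```python
import json

# Hypothetical ragged data: problem 0 has 2 generations, problem 1 has 3.
generations_list = [["a", "b"], ["x", "y", "z"]]
final_metadata = [[{"k": 0}, {"k": 1}], [{"k": 0}, {"k": 1}, {"k": 2}]]

for i in range(len(generations_list)):
    final_metadata[i] = [json.dumps(x) for x in final_metadata[i]]
    # Patched check: compare against generations_list[i], not generations_list[0],
    # which would fail here for i == 1 (3 metadata entries vs. 2 generations).
    assert len(final_metadata[i]) == len(generations_list[i]), f"{len(final_metadata[i])=}"
```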

Once patched inside the container, the benchmark can be executed via

inference-endpoint benchmark from-config -c examples/04_GPTOSS120B_Example/sglang_gptoss_120b_example.yaml

This runs both the performance and the accuracy datasets.

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor/cleanup

Related issues

Testing

  • Tests added/updated
  • All tests pass locally
  • Manual testing completed

Checklist

  • Code follows project style
  • Pre-commit hooks pass
  • Documentation updated (if needed)

Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
@arekay-nv arekay-nv requested a review from a team as a code owner January 15, 2026 05:00
Copilot AI review requested due to automatic review settings January 15, 2026 05:00
@github-actions

github-actions bot commented Jan 15, 2026

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@github-actions github-actions bot requested a review from nvzhihanj January 15, 2026 05:00
@gemini-code-assist

Summary of Changes

Hello @arekay-nv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request refactors the metrics reporting mechanism to persist the summary output to a dedicated file rather than displaying it on the console. This change enhances the usability of the metrics by providing a persistent record of the test results, which can be easily reviewed or integrated into automated workflows. It also introduces greater flexibility in how the report content is formatted for various output targets.

Highlights

  • Metrics Report Output: The metrics summary, previously printed to the console, is now dumped to a report.txt file within the designated report directory.
  • Flexible Output Formatting: The Report.display method and its internal helper _display_metric in reporter.py have been updated to accept a newline argument, allowing for more flexible control over line endings when writing output to different destinations (e.g., console vs. file).
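
The newline-parameter design described above can be sketched as follows. This is a minimal illustration with hypothetical metric names, not the project's actual Report class: `print()` appends its own line ending, while `file.write()` does not, so the caller controls the terminator.

```python
from typing import Callable

class Report:
    """Minimal sketch of a report whose display() takes a newline argument.

    Mirrors the idea in the PR (console vs. file output); the class,
    metric names, and formatting here are illustrative assumptions.
    """

    def __init__(self, metrics):
        self.metrics = metrics

    def display(self, fn: Callable[[str], None] = print, newline: str = "") -> None:
        # print() appends "\n" itself, so the console default is newline="";
        # file.write() appends nothing, so file callers pass newline="\n".
        for name, value in self.metrics.items():
            fn(f"{name}: {value:.3f}{newline}")

report = Report({"throughput_tok_s": 1234.5, "ttft_ms": 87.2})
report.display()  # console: one metric per line via print()
with open("report.txt", "w") as f:
    report.display(fn=f.write, newline="\n")  # file: explicit line endings
```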



Copilot AI left a comment


Pull request overview

This PR modifies the metrics reporting system to write the metrics summary to a file instead of printing to console. The changes add support for configurable newline characters to accommodate different output methods.

Changes:

  • Modified the Report.display() method to accept a newline parameter for controlling line endings
  • Updated the test session to write the metrics report to report.txt instead of printing to console
  • Added newline parameter propagation through the internal _display_metric method

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
src/inference_endpoint/metrics/reporter.py Added newline parameter to display methods and appended it to all formatted output strings
src/inference_endpoint/load_generator/session.py Changed from console output to file output by opening report.txt and using f.write with explicit newlines


report_path = report_dir / "report.txt"
with open(report_path, "w") as f:
    report.display(fn=f.write, newline="\n")
    logger.info(f"Report saved to {report_path}")

Copilot AI Jan 15, 2026


The log message is inside the file context manager but appears after the report.display() call. If an exception occurs during report.display(), the file may not be properly written but the success message will still be logged. Move the log statement outside the with block to ensure it only executes after successful file writing.

Suggested change
-     logger.info(f"Report saved to {report_path}")
+ logger.info(f"Report saved to {report_path}")
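
The suggestion amounts to dedenting the log call so it runs only after the file is closed. A self-contained sketch of that ordering (the `write_report` helper name and wiring are assumptions, not the project's actual code):

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

def write_report(report, report_dir: Path) -> Path:
    """Sketch of the suggested ordering: log success only after the
    `with` block has closed (and flushed) the report file."""
    report_path = report_dir / "report.txt"
    with open(report_path, "w") as f:
        report.display(fn=f.write, newline="\n")
    # Outside the `with`: if display() raised, we never reach this line,
    # so the success message cannot be logged for a half-written file.
    logger.info(f"Report saved to {report_path}")
    return report_path
```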

    newline=newline,
)
- fn("\n")
+ fn(f"\n{newline}")

Copilot AI Jan 15, 2026


Adding both a literal newline character '\n' and the newline parameter creates a double newline when newline='\n'. This should be either fn('\n') or fn(newline) depending on whether you want one or two newlines in the output.

Suggested change
- fn(f"\n{newline}")
+ if newline:
+     fn(f"{newline}{newline}")
+ else:
+     fn("\n")


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request modifies the metrics reporting to allow dumping the summary to a file instead of only to the console. The implementation is correct and achieves the goal. I've suggested a refactoring in src/inference_endpoint/metrics/reporter.py to improve code clarity and maintainability by centralizing the newline handling logic. This will make future modifications to the display functions less error-prone.

Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
@arekay-nv arekay-nv changed the title chore: Dump metrics report to file chore: Fix docker file for LCB Jan 17, 2026
@arekay-nv arekay-nv requested a review from Copilot January 20, 2026 05:33

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.



RUN pip install --no-cache-dir -r requirements/base.txt -r requirements/test.txt
RUN pip install -e .
RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh
# RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh

Copilot AI Jan 20, 2026


Duplicate commented-out line should be removed as the same command already exists on line 46.

Suggested change
- # RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh

if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Removing existing directory..."
    rm -rf "${LCB_ROOT}"
# Non-interactive mode: check if stdin is a terminal

Copilot AI Jan 20, 2026


The comment is misleading - the code checks if stdin IS a terminal to enter interactive mode, not non-interactive mode. The comment should read: 'Interactive mode: check if stdin is a terminal'.

Suggested change
- # Non-interactive mode: check if stdin is a terminal
+ # Interactive mode: check if stdin is a terminal
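
For reference, the interactive/non-interactive split the review describes (bash's `[[ -t 0 ]]` terminal test) can be sketched in Python; the `confirm_removal` name and prompt text are assumptions for illustration, not code from the PR:

```python
import sys
from typing import TextIO

def confirm_removal(path: str, stdin: TextIO = sys.stdin) -> bool:
    if stdin.isatty():
        # Interactive mode: stdin IS a terminal, so we can prompt the user
        # (the Python analogue of bash's `[[ -t 0 ]]` test).
        print(f"Remove existing directory {path}? [y/N] ", end="", flush=True)
        return stdin.readline().strip().lower().startswith("y")
    # Non-interactive mode (piped stdin, CI): refuse destructive action by default.
    return False
```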

@arekay-nv
Collaborator Author

Closing, as these features are supported by #105 and #107.

@arekay-nv arekay-nv closed this Jan 29, 2026
@github-actions github-actions bot locked and limited conversation to collaborators Jan 29, 2026
@arekay-nv arekay-nv deleted the arekay/cleanup_report_output branch April 2, 2026 03:05
