Conversation
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Summary of Changes

Hello @arekay-nv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request refactors the metrics reporting mechanism to persist the summary output to a dedicated file rather than displaying it on the console. This change enhances the usability of the metrics by providing a persistent record of the test results, which can be easily reviewed or integrated into automated workflows. It also introduces greater flexibility in how the report content is formatted for various output targets.
Pull request overview
This PR modifies the metrics reporting system to write the metrics summary to a file instead of printing to console. The changes add support for configurable newline characters to accommodate different output methods.
Changes:
- Modified the `Report.display()` method to accept a `newline` parameter for controlling line endings
- Updated the test session to write the metrics report to `report.txt` instead of printing to console
- Added `newline` parameter propagation through the internal `_display_metric` method
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `src/inference_endpoint/metrics/reporter.py` | Added `newline` parameter to display methods and appended it to all formatted output strings |
| `src/inference_endpoint/load_generator/session.py` | Changed from console output to file output by opening `report.txt` and using `f.write` with explicit newlines |
```python
report_path = report_dir / "report.txt"
with open(report_path, "w") as f:
    report.display(fn=f.write, newline="\n")
    logger.info(f"Report saved to {report_path}")
```
The log message sits inside the file context manager, after the `report.display()` call. At that point the file has not yet been flushed and closed, so "Report saved" is logged before the data is guaranteed to be on disk. Move the log statement outside the `with` block so it only executes after the file has been fully written and closed.
```diff
-    logger.info(f"Report saved to {report_path}")
+logger.info(f"Report saved to {report_path}")
```
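A minimal sketch of the pattern under review (the file name and content are illustrative, not the PR's actual report): the context manager only guarantees the file is flushed and closed once the `with` block exits, which is why the success log belongs after it.

```python
from pathlib import Path
import tempfile

report_dir = Path(tempfile.mkdtemp())
report_path = report_dir / "report.txt"

with open(report_path, "w") as f:
    f.write("requests: 100\n")  # stands in for report.display(fn=f.write)
    # Logging "saved" here would fire before f is flushed and closed.

# Outside the block the file is closed, so the message is now accurate.
print(f"Report saved to {report_path}")
```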
```diff
         newline=newline,
     )
-    fn("\n")
+    fn(f"\n{newline}")
```
Adding both a literal newline character '\n' and the newline parameter creates a double newline when newline='\n'. This should be either fn('\n') or fn(newline) depending on whether you want one or two newlines in the output.
```diff
-fn(f"\n{newline}")
+if newline:
+    fn(f"{newline}{newline}")
+else:
+    fn("\n")
```
Code Review
This pull request modifies the metrics reporting to allow dumping the summary to a file instead of only to the console. The implementation is correct and achieves the goal. I've suggested a refactoring in src/inference_endpoint/metrics/reporter.py to improve code clarity and maintainability by centralizing the newline handling logic. This will make future modifications to the display functions less error-prone.
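One way such a centralization could look (a sketch only, under the assumption of a simple metrics dict; `Report`, `line`, and the metric name here are hypothetical, not the PR's actual implementation):

```python
import sys

class Report:
    def __init__(self, metrics):
        self.metrics = metrics  # e.g. {"latency_p50_ms": 12.3}

    def display(self, fn=sys.stdout.write, newline="\n"):
        # All line-ending logic lives in one helper, so callers and
        # internal methods never append "\n" themselves.
        def line(text=""):
            fn(f"{text}{newline}")

        for name, value in self.metrics.items():
            line(f"{name}: {value}")
        line()  # one trailing blank line, no accidental doubling

report = Report({"latency_p50_ms": 12.3})
buf = []
report.display(fn=buf.append, newline="\n")
print("".join(buf))
```

With this shape, switching between console output and `f.write` needs no change to the formatting code, only a different `fn` and `newline`.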
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 2 comments.
```dockerfile
RUN pip install --no-cache-dir -r requirements/base.txt -r requirements/test.txt
RUN pip install -e .
RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh
# RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh
```
Duplicate commented-out line should be removed as the same command already exists on line 46.
```diff
-# RUN sudo bash /mnt/inference-endpoint/examples/07_GPT-OSS-120B_SGLang_Example/setup_lcb.sh
```
```bash
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Removing existing directory..."
    rm -rf "${LCB_ROOT}"
# Non-interactive mode: check if stdin is a terminal
```
The comment is misleading - the code checks if stdin IS a terminal to enter interactive mode, not non-interactive mode. The comment should read: 'Interactive mode: check if stdin is a terminal'.
```diff
-# Non-interactive mode: check if stdin is a terminal
+# Interactive mode: check if stdin is a terminal
```
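For comparison, the same "is stdin a terminal" check in Python terms (a sketch; `is_interactive` is a hypothetical helper, and `isatty()` is the analogue of bash's `[[ -t 0 ]]` test):

```python
import io
import sys

def is_interactive(stream=None) -> bool:
    """True when the stream is attached to a terminal, i.e. the
    script can safely show a y/N prompt; mirrors bash's [[ -t 0 ]]."""
    stream = sys.stdin if stream is None else stream
    return stream.isatty()

# A pipe or in-memory buffer is not a terminal, so a script seeing
# this should fall back to non-interactive behaviour (skip the prompt).
print(is_interactive(io.StringIO()))
```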
Dumps the metrics summary to a file instead of the console.
This PR runs gpt-oss-120b end to end (performance and accuracy datasets) via the container. The only patch needed is to update the LCB evaluator with the following patch:
Once patched inside the container, the benchmark can be executed via
Which will run the performance and the accuracy datasets.
Type of change
Related issues
Testing
Checklist