Conversation

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
Summary of Changes

Hello @anandhu-eng, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new feature that enhances compatibility with external submission checkers, particularly for MLPerf benchmarks. It allows the system to automatically generate a user.conf file from the benchmark's runtime settings.
Code Review
This PR introduces a compatibility mode for a submission checker, which generates a user.conf file from the benchmark's runtime settings. The changes include a new CLI flag, a configuration option, a utility function for the conversion, and comprehensive tests. My review highlights a bug where the new CLI flag is not correctly used, an issue with overly broad exception handling that could hide errors, and a suggestion to reconsider the default setting for the new compatibility mode to ensure a better user experience. The feature is otherwise well-implemented with good test coverage.
Pull request overview
This PR adds a submission checker compatibility mode that enables MLPerf loadgen compatibility by generating a user.conf file from runtime settings. The feature is enabled by default and converts endpoints-specific configuration keys to MLPerf loadgen format.
- Introduces a key mapping from endpoints runtime settings to MLPerf loadgen keys via a new constants module
- Adds a configuration flag `ensure_submission_checker_compatibility` (default: True) to control the feature
- Implements `user.conf` generation that transforms runtime_settings.json into MLPerf-compatible format
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| src/inference_endpoint/config/constants.py | New file defining mapping between endpoints keys and MLPerf loadgen keys |
| src/inference_endpoint/config/schema.py | Adds ensure_submission_checker_compatibility boolean field (default: True) to BenchmarkConfig |
| src/inference_endpoint/cli.py | Adds CLI argument --ensure-submission-checker-compatibility flag |
| src/inference_endpoint/commands/utils.py | Implements generate_user_conf_submission_checker() function to create user.conf from runtime_settings.json |
| src/inference_endpoint/commands/benchmark.py | Integrates user.conf generation after benchmark execution when compatibility mode is enabled |
| tests/unit/commands/test_utils.py | Adds comprehensive test suite covering success, error, and edge cases for user.conf generation |
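To make the conversion concrete, here is a minimal sketch of what a `generate_user_conf_submission_checker()`-style helper could look like. The mapping contents, function signature, and the `model`/`scenario` prefix are assumptions for illustration (the real mapping lives in `src/inference_endpoint/config/constants.py`); the output follows the usual MLPerf loadgen `model.Scenario.key = value` line format.

```python
import json
from pathlib import Path

# Hypothetical mapping from endpoints runtime-settings keys to MLPerf
# loadgen keys; the PR defines the real mapping in config/constants.py.
ENDPOINTS_TO_LOADGEN_KEYS = {
    "target_qps": "target_qps",
    "min_duration_ms": "min_duration",
}

def generate_user_conf_submission_checker(runtime_settings_path, output_path,
                                          model="*", scenario="Server"):
    """Sketch: translate runtime_settings.json into a user.conf file.

    Only keys present in both the settings file and the mapping are
    emitted; everything else is ignored.
    """
    settings = json.loads(Path(runtime_settings_path).read_text())
    lines = []
    for src_key, loadgen_key in ENDPOINTS_TO_LOADGEN_KEYS.items():
        if src_key in settings:
            lines.append(f"{model}.{scenario}.{loadgen_key} = {settings[src_key]}")
    Path(output_path).write_text("\n".join(lines) + "\n")
    return lines
```

With `{"target_qps": 10}` as input, this would write a single line `*.Server.target_qps = 10`, which is the shape the submission checker expects from a user.conf.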
@tanvi-mlcommons @anandhu-eng is this ready to review - there are some changes requested.
Hi @arekay-nv, the comments are addressed in the latest PR. I think it's good to go for a review. I will retest the outputs with the latest changes this week.
nv-alicheng left a comment
Approving - Looks fine; if it is functional it should be good to merge. We will revisit either in the next phase or when we get around to fleshing out the CLI with Typer.