Conversation
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; only add the label once the PR is code complete and local testing has been performed.
Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request implements a temporary workaround in the test suite to address an issue where the lm_head.weight comparison fails in the test_quant_model_reload test.
Code Review
This pull request skips the lm_head.weight comparison in the test_quant_model_reload test, which is a reasonable temporary measure to address a failing test, as indicated by the TODO comment. My review includes a suggestion to make the test more robust by dynamically identifying the language model head's weight key instead of hardcoding it. This will improve maintainability and resilience to model architecture changes.
```python
# Skip LM Head weight for now
# Note that the embedding is quantized
# TODO(@kylesayrs): this is a manifestation of not using proper save context
if key == "lm_head.weight":
```
Hardcoding "lm_head.weight" makes this test brittle and dependent on the specific model's architecture. To make it more robust, consider dynamically identifying the output embedding layer's weight key.
You could achieve this by adding the following logic before the loop:

```python
lm_head_key = None
if hasattr(model, "get_output_embeddings"):
    output_embeddings = model.get_output_embeddings()
    if output_embeddings is not None and hasattr(output_embeddings, "weight"):
        for name, param in model.named_parameters():
            if param is output_embeddings.weight:
                lm_head_key = name
                break
```

And then using lm_head_key in the comparison:

```python
if key == lm_head_key:
    continue
```

This would make the test more resilient to changes in the model architecture, or to other models being used in this test in the future.
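As a quick sanity check of what key that logic actually resolves to, the following standalone snippet can be run against any small causal LM (the tiny model name here is only an example). Note that for models with tied input/output embeddings, `named_parameters()` reports the shared tensor under the embedding's name, so `lm_head_key` may not be `"lm_head.weight"` at all:

```python
from transformers import AutoModelForCausalLM

# Illustrative model choice; any small causal LM works here.
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

lm_head_key = None
output_embeddings = model.get_output_embeddings()
if output_embeddings is not None and hasattr(output_embeddings, "weight"):
    for name, param in model.named_parameters():
        if param is output_embeddings.weight:
            lm_head_key = name
            break

# With tied weights this prints the embedding's parameter name,
# e.g. "transformer.wte.weight", rather than "lm_head.weight".
print(lm_head_key)
```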
Better fix by #2362