
fix transformers v5 & ministral compatibility#2470

Closed
leo-bujdei-leonte wants to merge 1 commit into vllm-project:main from leo-bujdei-leonte:transformers-v5

Conversation

@leo-bujdei-leonte

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the project's compatibility with the transformers library by updating dependency constraints and resolving a critical model saving issue. It also refines the SmoothQuant quantization process to ensure specified layers are properly ignored and improves robustness against changes in transformers internal structure.

Highlights

  • Dependency Updates: The transformers dependency has been updated to remove its upper version bound, allowing compatibility with newer versions. The compressed-tensors dependency for non-release builds was also updated to a specific alpha version.
  • Model Saving Fix: A bug causing a KeyError during model saving with transformers has been addressed by explicitly removing the hf_device_map attribute before serialization.
  • SmoothQuant Improvement: The SmoothQuant module matching logic now correctly respects the ignore list, preventing unintended smoothing of specified layers.
  • Transformers Version Compatibility: A fallback import mechanism for TORCH_INIT_FUNCTIONS has been added to ensure broader compatibility with different transformers library versions.
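The fallback import described in the last highlight could be sketched as follows. The exact module paths are assumptions for illustration (the PR diff is not shown here); only the try/except shape is the point:

```python
# Hedged sketch of a version-tolerant import. TORCH_INIT_FUNCTIONS moved
# between transformers releases, so try one location and fall back to another.
try:
    from transformers.modeling_utils import TORCH_INIT_FUNCTIONS  # assumed older layout
except ImportError:
    try:
        from transformers.utils import TORCH_INIT_FUNCTIONS  # assumed newer layout
    except ImportError:
        # Last resort: an empty mapping keeps callers working, at the cost of
        # skipping the init-function bookkeeping entirely.
        TORCH_INIT_FUNCTIONS = {}
```

Whichever branch succeeds, callers see the same `TORCH_INIT_FUNCTIONS` name, so the rest of the codebase needs no changes.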


Changelog
  • setup.py
    • Updated transformers dependency to remove the upper version bound.
    • Updated compressed-tensors dependency to 0.14.1a20260225 for non-release builds.
  • src/llmcompressor/modifiers/transform/smoothquant/base.py
    • Modified match_modules_set to pass the ignore list, ensuring it is respected during module matching.
  • src/llmcompressor/transformers/compression/compressed_tensors_utils.py
    • Added logic to delete hf_device_map from the model before saving to prevent a KeyError during revert_weight_conversion.
  • src/llmcompressor/utils/dev.py
    • Implemented a try-except block for importing TORCH_INIT_FUNCTIONS to handle its relocation in different transformers versions.
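The hf_device_map fix in compressed_tensors_utils.py can be sketched as below. The helper name and the stand-in model class are hypothetical; only the attribute deletion mirrors the change described in the changelog:

```python
class FakeModel:
    """Stand-in for a HF PreTrainedModel; for illustration only."""

    def __init__(self):
        # accelerate sets this attribute when dispatching a model across devices
        self.hf_device_map = {"": "cuda:0"}

    def save_pretrained(self, save_directory):
        return f"saved to {save_directory}"


def save_without_device_map(model, save_directory):
    # Drop hf_device_map before serialization so transformers' save path
    # (e.g. revert_weight_conversion) does not hit a KeyError on it.
    if hasattr(model, "hf_device_map"):
        del model.hf_device_map
    return model.save_pretrained(save_directory)
```

Usage on the stub: `save_without_device_map(FakeModel(), "./out")` saves the model with the attribute already removed.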

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates dependencies and code for compatibility with transformers v5. The changes look good overall, but I've found a potential typo in a dependency version in setup.py that could break package installation. Please take a look.

setup.py (diff under review):

  "compressed-tensors==0.13.0"
  if BUILD_TYPE == "release"
- else "compressed-tensors>=0.13.1a2"
+ else "compressed-tensors==0.14.1a20260225"
Contributor


critical

The version for compressed-tensors appears to have a typo in the year. 2026 is in the future, which will likely cause package installation to fail as this version probably doesn't exist. Did you mean to use 2024?
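As a side note, the pinned string does parse as a valid PEP 440 alpha pre-release, so installation would fail only because no such release exists on PyPI, not because the string is malformed. A quick check with the third-party packaging library:

```python
from packaging.version import Version

v = Version("0.14.1a20260225")
# "a20260225" is an alpha pre-release segment under PEP 440; pip resolves such
# versions only with --pre or an exact pin, and only if the release is published.
assert v.is_prerelease
assert v.pre == ("a", 20260225)
```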

Suggested change:

- else "compressed-tensors==0.14.1a20260225"
+ else "compressed-tensors==0.14.1a20240225"

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: the label is required to run the full testing suite; please add it only once the PR is code complete and local testing has been performed.
