
fix: add revision to vLLM loader #427

Open

majiayu000 wants to merge 2 commits into unslothai:main from majiayu000:fix/revision-vllm-fast-inference

Conversation

@majiayu000

Summary

  • Add a revision parameter to load_vllm and pass it through to the vLLM engine args (sketched below)
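
A minimal sketch of what the change looks like, assuming the loader constructs vLLM's offline LLM engine. The parameter names and defaults here are illustrative, not the actual unsloth-zoo signature:

```python
from typing import Optional

from vllm import LLM  # vLLM's offline-inference entry point

def load_vllm(
    model_name             : str,
    gpu_memory_utilization : float = 0.8,
    max_seq_length         : int   = 8192,
    revision               : Optional[str] = None,  # new: pin a Hub branch, tag, or commit
) -> LLM:
    # Forward revision into the engine arguments so vLLM downloads the
    # pinned model version instead of whatever "main" currently points to.
    return LLM(
        model                  = model_name,
        revision               = revision,
        gpu_memory_utilization = gpu_memory_utilization,
        max_model_len          = max_seq_length,
    )
```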

Test Plan

  • Not run (no supported GPU in local env)

@gemini-code-assist
Contributor

Summary of Changes

Hello @majiayu000, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a revision parameter to the vLLM model loading utility, letting users pin exactly which version or revision of a model is loaded. That matters for reproducibility and for managing model updates.

Highlights

  • VLLM Loader Enhancement: The load_vllm function now accepts a revision parameter, allowing users to specify a particular model revision when loading vLLM models.
  • VLLM Engine Integration: The newly introduced revision parameter is passed directly into the vLLM engine's arguments, ensuring the specified model version is used (see the usage sketch below).
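
As a concrete illustration, a hypothetical call pinning the model to a specific Hub revision might look like this (the model name and commit hash are placeholders):

```python
# Hypothetical usage: pin the model to an exact Hugging Face Hub revision
# so reloads stay reproducible even if the repo's "main" branch moves on.
llm = load_vllm(
    model_name = "unsloth/Llama-3.2-1B-Instruct",  # placeholder model name
    revision   = "a1b2c3d",                        # branch, tag, or commit hash
)
```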


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a revision parameter to the load_vllm function to allow specifying a model revision, which is then passed to the vLLM engine arguments. My feedback includes a suggestion to add a type hint for the new parameter to improve code clarity and consistency.

gpu_memory_utilization : float = 0.8,
max_seq_length : int = 8192,
dtype : torch.dtype = None,
revision = None,

Severity: medium

For consistency with other parameters and for better code clarity, please add a type hint for the new revision parameter. Based on its usage, Optional[str] would be appropriate.

Suggested change:
- revision = None,
+ revision : Optional[str] = None,
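
One small caveat: the suggestion assumes Optional is already imported from typing in that module; if it is not, the import has to be added alongside the hint. On Python 3.10+, str | None is an equivalent spelling that needs no import.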


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 1641 to 1644
max_seq_length : int = 8192,
dtype : torch.dtype = None,
revision = None,
training : bool = True,


P2: Keep positional-arg compatibility for load_vllm

Inserting revision between dtype and training changes the positional argument order, so any external callers using positional arguments after dtype will now pass their training boolean into revision, shifting the rest of the parameters. That silently alters behavior (e.g., training flips back to its default True, float8_kv_cache stays False, etc.) and can cause wrong runtime settings. To avoid a backward-compat regression, add revision at the end or make the remaining parameters keyword-only, as sketched below.
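
A minimal sketch of the keyword-only variant, hypothetically reworking the signature fragment quoted above (dtype's annotation is tightened to Optional to match its None default; anything not shown in the fragment is invented):

```python
from typing import Optional

import torch

def load_vllm(
    model_name             : str,
    gpu_memory_utilization : float = 0.8,
    max_seq_length         : int   = 8192,
    dtype                  : Optional[torch.dtype] = None,
    *,  # everything after the bare * is keyword-only, so inserting
        # revision here cannot silently shift existing positional callers
    revision               : Optional[str] = None,
    training               : bool = True,
    float8_kv_cache        : bool = False,
):
    ...
```

With the bare * in place, a legacy positional call such as load_vllm(name, 0.8, 8192, None, True) raises a TypeError at the call site instead of silently binding True to revision.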


@Datta0
Collaborator

Datta0 commented Jan 7, 2026

Ref: unslothai/unsloth#3816

Collaborator

@Datta0 Datta0 left a comment


LGTM!

@majiayu000
Author

TFTR
