
Conversation

@nvpohanh (Contributor) commented on Nov 21, 2025

Changes:

  • Update vLLM version to v0.11.2.
  • Remove custom_ops and cudagraph_mode from compilation-config, as they are no longer needed.
  • Remove the VLLM_FLASHINFER_ALLREDUCE_FUSION_THRESHOLDS_MB env var, which is no longer needed.
  • Add stream-interval: 20 for GPT-OSS to avoid being bottlenecked by host overheads in the max-throughput scenario.
  • Disable Attn+Q fusion on Llama4, since it no longer works.
  • Add configs for GPT-OSS + EAGLE3 (speculative decoding).
  • Rename cuda-graph-sizes to max-cudagraph-capture-size (a config sketch follows this list).
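
For reference, a minimal sketch of what a GPT-OSS recipe config might look like after these changes. The keys mirror the example discussed in the review below; the values are illustrative, not the recipe's authoritative settings:

# illustrative snippet only; values are examples, not the exact recipe contents
no-enable-prefix-caching: true
max-cudagraph-capture-size: 2048   # renamed from cuda-graph-sizes
max-num-batched-tokens: 8192
stream-interval: 20                # batches streamed responses to cut host overhead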

Signed-off-by: Po-Han Huang <[email protected]>
@gemini-code-assist (Contributor)

Summary of Changes

Hello @nvpohanh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on updating and optimizing the vLLM recipes for Llama3, Llama4, and GPT-OSS models to align with the vLLM v0.11.2 release. The changes streamline compilation settings, remove obsolete environment variables, and introduce new parameters to enhance performance, particularly for GPT-OSS models, by improving streaming throughput and enabling speculative decoding. These updates ensure the recipes leverage the latest vLLM capabilities for efficient model serving.

Highlights

  • vLLM Version Update: All Llama3, Llama4, and GPT-OSS recipes have been updated to use vLLM v0.11.2, ensuring compatibility with the latest features and performance enhancements.
  • Streamlined Compilation Configurations: The redundant `custom_ops` and `cudagraph_mode` parameters have been removed from the `compilation-config` across the Llama3, Llama4, and GPT-OSS recipes, simplifying the configuration.
  • Deprecated Environment Variable Removal: The `VLLM_FLASHINFER_ALLREDUCE_FUSION_THRESHOLDS_MB` environment variable, which is no longer necessary, has been removed from the model launch scripts.
  • GPT-OSS Throughput Optimization: A `stream-interval: 20` parameter has been introduced to the GPT-OSS configurations to reduce host overheads and improve maximum throughput during streaming.
  • Llama4 Attn+Q Fusion Disabled: Attn+Q fusion has been explicitly disabled for Llama4 models, as this feature no longer works in the updated vLLM version.
  • GPT-OSS Speculative Decoding Support: New configurations have been added for GPT-OSS models to enable speculative decoding with EAGLE3, which is designed to speed up text generation (a hedged launch sketch follows this list).
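
As a rough illustration of the speculative-decoding item above: vLLM typically enables EAGLE-style speculation through a --speculative-config JSON argument. The sketch below is an assumption-laden example, not the recipe's actual command; the GPT-OSS checkpoint, the draft-model placeholder, and the num_speculative_tokens value are all illustrative.

# hedged sketch: flag spelling follows vLLM's --speculative-config convention;
# <eagle3-draft-model> is a placeholder and 3 speculative tokens is an arbitrary example
vllm serve openai/gpt-oss-120b \
  --speculative-config '{"method": "eagle3", "model": "<eagle3-draft-model>", "num_speculative_tokens": 3}' \
  --stream-interval 20 &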

@gemini-code-assist (bot) left a comment


Code Review

This pull request updates the Llama and GPT-OSS recipes for vLLM v0.11.2. The changes include updating the Docker image version, removing obsolete configuration parameters, and adding new ones like stream-interval for GPT-OSS. Overall, the updates are well-aligned with the pull request's goals. However, I've identified a few issues, including a critical syntax error in a YAML configuration example within the documentation, a likely typo in a command-line argument that is used inconsistently across files, and some documentation text that might be confusing as it refers to parameters not used in the specific recipe. Addressing these points will improve the correctness and clarity of the recipes.

no-enable-prefix-caching: true
max-cudagraph-capture-size: 2048
max-num-batched-tokens: 8192
stream-interval 20

critical

There is a YAML syntax error in this example configuration. The stream-interval key is missing a colon. This will cause an error for users who copy and paste this configuration.

Suggested change
stream-interval 20
stream-interval: 20
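
One way to catch this class of mistake before launching is to parse the recipe file first. A minimal check, assuming PyYAML is installed and the recipe is saved as config.yaml:

python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' config.yaml  # raises a parse error on the missing colon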

--tensor-parallel-size 1 \
--max-num-seqs 512 &
--max-num-seqs 512 \
--max-model-length 10240 &

high

The command-line argument --max-model-length appears to be a typo. The correct argument in vLLM is typically --max-model-len. This is used correctly in OpenAI/GPT-OSS.md in this same PR. Using the wrong argument name could cause the server to fail to start or to ignore this important parameter.

Suggested change
--max-model-length 10240 &
--max-model-len 10240 &
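
If there is any doubt about which spelling the installed vLLM build accepts, the server help output can be checked directly (this assumes the vllm CLI is on PATH):

vllm serve --help 2>&1 | grep -i "max-model-len"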

--tensor-parallel-size 1 \
--max-num-seqs 512 &
--max-num-seqs 512 \
--max-model-length 10240 &

high

The command-line argument --max-model-length appears to be a typo. The correct argument in vLLM is typically --max-model-len. This is used correctly in OpenAI/GPT-OSS.md in this same PR. Using the wrong argument name could cause the server to fail to start or to ignore this important parameter.

Suggested change
--max-model-length 10240 &
--max-model-len 10240 &

Comment on lines +258 to +259
- `Median Inter-Token Latency (ITL)`: The typical time delay between a response for the completion of one output token (or output tokens) and the next response for the completion of token(s).
- If the `--stream-interval 20` flag is added in the server command, the ITL will be the completion time for every 20 output tokens.

medium

The explanation for Median Inter-Token Latency (ITL) mentions the --stream-interval 20 flag. However, this flag is not used in the Llama 3.3 recipe provided in this document. This could be confusing for users. Please consider removing this note or clarifying that it applies to other models/recipes.

Comment on lines +261 to +262
- `Median Inter-Token Latency (ITL)`: The typical time delay between a response for the completion of one output token (or output tokens) and the next response for the completion of token(s).
- If the `--stream-interval 20` flag is added in the server command, the ITL will be the completion time for every 20 output tokens.

medium

The explanation for Median Inter-Token Latency (ITL) mentions the --stream-interval 20 flag. However, this flag is not used in the Llama 4 recipe provided in this document. This could be confusing for users. Please consider removing this note or clarifying that it applies to other models/recipes.
