[Model] Restore Gemma3 GGUF multimodal support with GGUF-only guards #29198
base: main
Conversation
Code Review
This pull request effectively restores multimodal support for Gemma3 GGUF models by introducing robust file-format-based guards. The approach is sound and the defense-in-depth mechanism is a good practice to prevent regressions on HuggingFace models. My review focuses on performance optimizations within the newly restored generate_attention_masks method, suggesting more idiomatic and efficient PyTorch constructs to improve performance on this critical path.
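The reviewer's specific suggestions are not reproduced above, so the following is purely illustrative: it is not the PR's generate_attention_masks implementation and not the reviewer's actual proposal, just the kind of loop-free, vectorized PyTorch construction such feedback typically points toward for a Gemma3-style mask, where image tokens from the same image attend to each other bidirectionally on top of the causal mask. The function name and arguments are hypothetical:

```python
import torch


def gemma3_style_mask(token_ids: torch.Tensor, image_token_id: int) -> torch.Tensor:
    """Causal attention everywhere, plus bidirectional attention between
    image tokens that belong to the same contiguous image block."""
    seq_len = token_ids.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    is_image = token_ids == image_token_id
    # Give every contiguous run of image tokens its own block id (0 = text token).
    block_start = is_image & ~torch.cat([is_image.new_zeros(1), is_image[:-1]])
    block_id = torch.cumsum(block_start.to(torch.int64), dim=0) * is_image
    same_block = (
        (block_id.unsqueeze(0) == block_id.unsqueeze(1))
        & is_image.unsqueeze(0)
        & is_image.unsqueeze(1)
    )
    return causal | same_block
```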
💡 Codex Review
Here are some automated review suggestions for this pull request.
Restores custom attention mask generation for Gemma3 GGUF multimodal models that was partially reverted in vllm-project#28995. Implements robust GGUF-only guards to ensure the feature only applies to GGUF models and does not affect HF models.

Changes:
- Add uses_custom_attention_masks() utility with GGUF file format check
- Add uses_custom_attention_masks property to ModelConfig
- Initialize uses_custom_attention_masks in GPUModelRunner
- Restore generate_attention_masks() method to Gemma3ForConditionalGeneration
- Implement 3-layer defense-in-depth guard mechanism

The implementation uses check_gguf_file() to guarantee that the custom attention mask logic only triggers for GGUF files, preventing the issue that caused the original revert, where HF models incorrectly triggered the custom logic. Tested with GGUF models (1B, 4B, 270M) for both text-only and multimodal inference. HF model compatibility verified via the pytest multimodal test suite.

Signed-off-by: Luciano Martins <[email protected]>
Hi @Isotr0py / @DarkLight1337, this is a quick one: it reintroduces #27772, now with guardrails to avoid the problems that caused that PR to be reverted via #28995. It is pretty much all reviewed already (not much has changed since #27772) and ready to go :)
Summary
This PR restores custom attention mask generation for Gemma3 GGUF multimodal models that was partially reverted in #28995. The implementation uses robust GGUF-only file format guards to ensure the feature exclusively applies to GGUF models and does not affect HuggingFace models.
Resolves: #28995 (HF model regression)
Restores functionality from: #27772
Background
PR #27772 initially added Gemma3 GGUF multimodal support, enabling users to run quantized Gemma3 multimodal models with both text-only and image+text prompts. However, it was partially reverted in #28995 because the custom attention mask logic incorrectly triggered for HuggingFace models, causing test failures.
Root cause of #28995: The original implementation lacked file format guards, causing the custom attention mask generation to activate for both GGUF and HF models.
Solution
This PR addresses the regression by implementing a 3-layer defense-in-depth guard mechanism (a sketch of the config-side guard follows this list):
Layer 1: Model Format Check (Primary Guard)
Layer 2: Multimodal Feature Check
Layer 3: Method Existence Check
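A minimal sketch of how Layers 1 and 2 might compose into the new uses_custom_attention_masks() helper. Only check_gguf_file() and the file and function names listed under Files Modified come from the PR; the exact signature, arguments, and import path below are assumptions:

```python
from vllm.transformers_utils.config import check_gguf_file  # assumed import path


def uses_custom_attention_masks(model: str, architectures: list[str] | None) -> bool:
    # Layer 1 (primary guard): only GGUF checkpoints can ever qualify.
    if not check_gguf_file(model):
        return False
    # Layer 2: only the multimodal Gemma3 architecture uses custom masks.
    return "Gemma3ForConditionalGeneration" in (architectures or [])
```

ModelConfig would then expose the result as its uses_custom_attention_masks property and GPUModelRunner would copy it at init time, so HF checkpoints short-circuit at Layer 1 and never reach the custom mask path.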
Result: HF models never have uses_custom_attention_masks = True, preventing the issue that caused #28995.
Changes
Files Modified (4)
- vllm/transformers_utils/config.py: add uses_custom_attention_masks() utility function built on check_gguf_file()
- vllm/config/model.py: add uses_custom_attention_masks property to ModelConfig
- vllm/v1/worker/gpu_model_runner.py: initialize the uses_custom_attention_masks attribute in GPUModelRunner
- vllm/model_executor/models/gemma3_mm.py: restore the generate_attention_masks() method
Test Plan
GGUF Model Validation
Tested with multiple quantized Gemma3 GGUF models to ensure functionality across different model sizes:
Text-Only Inference: verified plain text generation on the quantized checkpoints.
Multimodal Inference: verified image+text prompts using the mmproj.gguf vision tower (a usage sketch follows).
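For context, a hypothetical end-to-end invocation of a local GGUF checkpoint (the paths, repo id, and image placeholder token are assumptions, not taken from this PR; the test plan only states that the mmproj.gguf vision tower is used alongside the main GGUF file):

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Assumed local layout: gemma-3-4b-it-q4_0.gguf with mmproj.gguf next to it.
llm = LLM(
    model="/models/gemma-3-4b-it-q4_0.gguf",
    tokenizer="google/gemma-3-4b-it",  # GGUF runs still use the HF tokenizer
)

outputs = llm.generate(
    {
        "prompt": "<start_of_image> Describe this image.",  # placeholder token assumed
        "multi_modal_data": {"image": Image.open("example.jpg")},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```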
HuggingFace Model Regression Testing
Executed the full vLLM multimodal test suite to verify zero impact on HF models:
pytest -s -v tests/models/multimodal/generation/test_common.py -k "gemma3-test"
This ensures the GGUF guards prevent any unintended activation of the custom attention mask logic for HuggingFace models.
Test Results
GGUF Model Results (All Pass)
Multimodal Output Example:
HuggingFace Model Regression Test (All Pass)
pytest -s -v tests/models/multimodal/generation/test_common.py -k gemma3-test
Result: 8 passed, 335 deselected, 23 warnings in 915.69s (15m 15s)
Test Coverage:
Verification of Fix for #28995
The failing test from #28995 (pytest gemma3-test) now passes completely.
Why it works now:
uses_custom_attention_masks returns False for HF models (no .gguf file detected)
Isolation & Safety Guarantees
How HF Models Are Protected (a runner-side sketch follows this list):
File Format Check:
Short-Circuit Logic:
Runtime Guard:
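Illustrative only: a standalone rendering of how the runtime guard inside GPUModelRunner might look. The argument list of generate_attention_masks() and the surrounding control flow are assumptions, not the PR's code:

```python
def maybe_generate_attention_masks(runner, input_ids, positions):
    # Runtime guard: the custom hook only runs when the config flag is set
    # (GGUF + multimodal Gemma3) and the loaded model actually defines it.
    if runner.uses_custom_attention_masks and hasattr(
        runner.model, "generate_attention_masks"
    ):
        return runner.model.generate_attention_masks(input_ids, positions)
    # HF models fall through to vLLM's default mask handling.
    return None
```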
What Changed from #27772:
Code Quality
Verified with ruff check, ruff format, and mypy
Backward Compatibility
Documentation
No user-facing documentation changes required. The feature is transparent to users - GGUF Gemma3 multimodal models work automatically without configuration.
Release Notes
This fix should be included in release notes as:
Checklist
PR title follows the [Model] <description> convention
Commits are signed off (git commit -s)
Related PRs: