# Gemma3-4b QNN example fixes #2106
**Status:** Open

qti-kromero wants to merge 26 commits into `microsoft:main` from `CodeLinaro:dev/qti-kromero/gemma3`.
## Commits

The diff below reflects changes from 13 of the 26 commits.
- `d494c82` Initial commit (qti-kromero)
- `ddf3ea8` Add README and start config (qti-kromero)
- `1f54074` QuaRot passing, working on GptqQuantizer (qti-kromero)
- `6cae95f` Work on dataset integration (qti-kromero)
- `2d0872e` Data processing works (qti-kromero)
- `6a6f67d` Fix lint issues and cleanup (qti-kromero)
- `cd24ddf` Adding vision resources (qti-kromero)
- `636e982` Add Gemma3 vision configurations (qti-kromero)
- `b4ea7a3` Fix linting error (qti-kromero)
- `1f69af3` Vision model onnx conversion working (qti-kromero)
- `aed20ec` Enable quant on text model (qti-kromero)
- `ba0633c` Improve README (qti-kromero)
- `5ad910d` Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero)
- `acbdfdc` Add files from Prudvhi (qti-kromero)
- `f7178ae` Updates (qti-kromero)
- `bd70ff4` Updates (qti-kromero)
- `c962cee` Add olive requirements file (prudhvi-qti)
- `360d9c2` update (qti-kromero)
- `5fcda5c` Update Olive scripts for gemma3 (prudhvi-qti)
- `14018ee` Update few python packages (prudhvi-qti)
- `1f89241` Use the same llava dataset for text model as well (prudhvi-qti)
- `7d4ced8` Minor cleanup (qti-kromero)
- `a0bd703` Add system requirements (prudhvi-qti)
- `f712bdc` Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero)
- `f685073` Remove examples (qti-kromero)
- `5dff155` Fix review comments (qti-kromero)
## Files changed

### `README.md` (new file)
# Gemma-3-4B Model Optimization

This repository demonstrates the optimization of the [Google Gemma-3-4B](https://huggingface.co/google/gemma-3-4b-it) model using **post-training quantization (PTQ)** techniques for QNN (Qualcomm Neural Network) execution. The optimization process uses an environment based heavily upon the [PTQ tutorial for Phi-3.5](https://github.com/CodeLinaro/Olive/blob/main/examples/phi3_5/README.md).
## File Overview

This example contains the following key files:

- **`env_setup.sh`** - Automated environment setup script (Linux only)
- **`gemma3-4b-text-qnn-config.json`** - Olive configuration for optimizing the text component
- **`gemma3-4b-vision-qnn-config.json`** - Olive configuration for optimizing the vision component
- **`user_script.py`** - Dataset handling and preprocessing utilities
- **`custom_gemma3_4b_it_vision.py`** - Vision model loader for the optimization pipeline

## Prerequisites

### System Requirements

- **Operating System**: Linux (automated setup script is Linux-only)
- **Python**: 3.10
- **Package Manager**: [uv](https://docs.astral.sh/uv/getting-started/installation/#installation-methods)
- **Storage**: ~13GB for the COCO train2017 dataset (downloaded automatically)

### Dependencies Installed by Setup Script

The `env_setup.sh` script installs the following components:

- setuptools (for building Olive from source)
- Olive requirements and dependencies
- AutoGPTQ (from source)
- GPTQModel (specific commit: `558449bed3ef2653c36041650d30da6bbbca440d`)
- onnxruntime-qnn (pre-release version)
## Setup Instructions

### Automated Setup (Recommended)

```bash
source env_setup.sh
```

### Manual Setup (Alternative)

If you prefer to set up manually or need to troubleshoot:

1. Install setuptools:

```bash
uv pip install setuptools
```

2. Install requirements:

```bash
uv pip install -r ../requirements.txt
uv pip install -r ../../../requirements.txt
```

3. Install AutoGPTQ from source:

```bash
export BUILD_CUDA_EXT=0
uv pip install --no-build-isolation git+https://github.com/PanQiWei/AutoGPTQ.git
```

4. Install GPTQModel with the Gemma3 fix:

```bash
uv pip install --no-build-isolation git+https://github.com/ModelCloud/GPTQModel.git@558449bed3ef2653c36041650d30da6bbbca440d
```

5. Install onnxruntime-qnn:

```bash
uv pip install -r https://raw.githubusercontent.com/microsoft/onnxruntime/refs/heads/main/requirements.txt
uv pip install -U --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn --no-deps
```

> **Important:** The setup uses a specific commit hash for GPTQModel (`558449bed3ef2653c36041650d30da6bbbca440d`) to address a [memory leak issue](https://github.com/ModelCloud/GPTQModel/commit/558449bed3ef2653c36041650d30da6bbbca440d) with Gemma3 models.
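After either setup path, a quick import check can confirm the environment is usable. This is a minimal sketch that assumes only the packages named above (note that the `onnxruntime-qnn` wheel provides the `onnxruntime` module):

```python
# Minimal post-setup sanity check: import the key packages installed by
# env_setup.sh and print their versions.
import importlib

for name in ("auto_gptq", "gptqmodel", "onnxruntime"):
    module = importlib.import_module(name)
    print(name, getattr(module, "__version__", "unknown"))
```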
## Optimization Process

Since Gemma-3-4B is a multi-modal model composed of both vision and text components, the strategy for optimizing it through Olive is to operate on the constituent models separately before configuring them to work together at the onnxruntime-genai stage.

### Configuration Differences

**Text Configuration (`gemma3-4b-text-qnn-config.json`)**:
- Uses HuggingFace model directly (`google/gemma-3-4b-it`)
- Applies comprehensive optimization pipeline: QuaRot → GptqModel → ModelBuilder → Quantization
- Outputs to: `models/gemma-3-4b-it-text/`

**Vision Configuration (`gemma3-4b-vision-qnn-config.json`)**:
- Uses custom PyTorch model loader (`custom_gemma3_4b_it_vision.py`)
- Simpler pipeline: ONNX Conversion → Graph Surgery → Quantization
- Outputs to: `models/gemma-3-4b-it-vision/`
### Running Optimization

Execute the following commands to separately produce optimized binaries for each component:

```bash
olive run --config gemma3-4b-text-qnn-config.json
```

```bash
olive run --config gemma3-4b-vision-qnn-config.json
```
## Expected Outputs

After successful optimization, you will find:

- **Text model outputs**: `models/gemma-3-4b-it-text/`
- **Vision model outputs**: `models/gemma-3-4b-it-vision/`
- **Cache directory**: `cache/` (intermediate files and downloaded datasets)
- **Dataset**: `.cache/train2017/` (COCO train2017 images, ~13GB)

Both configurations use `"no_artifacts": true`, meaning only the final optimized models are retained.
## Troubleshooting

### Common Issues

**Insufficient Storage**: The COCO train2017 dataset requires ~13GB of storage and is downloaded automatically to `.cache/train2017/`.

**Memory Requirements**: The optimization process, particularly for the text model with its comprehensive pipeline, requires substantial memory.

**QNN Provider**: Ensure the QNNExecutionProvider is properly installed and configured in your environment.
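As a first check for provider issues, you can list what onnxruntime reports. A minimal sketch, assuming the `onnxruntime-qnn` wheel from the setup steps is installed in the active environment:

```python
# List available execution providers; QNNExecutionProvider should appear
# when the onnxruntime-qnn package is installed correctly.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)
assert "QNNExecutionProvider" in providers, "QNNExecutionProvider not found"
```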
**Platform Limitation**: The current setup script is designed for Linux only. Windows/macOS users will need to adapt the manual setup steps.
**Dataset Download**: If the COCO dataset download fails, check your internet connection and available storage. The script uses `wget`, which must be available on your system.
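A small pre-flight check for the `wget` and storage requirements; a sketch using only the standard library:

```python
# Pre-flight check: confirm wget is on PATH and that there is enough free
# disk space for the ~13GB COCO train2017 download.
import shutil

if shutil.which("wget") is None:
    print("wget not found; install it before running the dataset download")

free_gb = shutil.disk_usage(".").free / 1e9
if free_gb < 13:
    print(f"only {free_gb:.1f} GB free; the dataset needs ~13 GB")
```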
### `custom_gemma3_4b_it_vision.py` (new file)

```python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------

import logging

import torch
from transformers import AutoModel

logger = logging.getLogger(__name__)


class Gemma3VisualEmbeddingGenerator(torch.nn.Module):
    def __init__(self, full_model):
        super().__init__()
        # Extract only the vision components
        self.vision_tower = full_model.vision_tower
        self.multi_modal_projector = full_model.multi_modal_projector

    def forward(self, pixel_values):
        # Process images through vision tower
        image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)
        selected_image_feature = image_outputs.last_hidden_state
        # Project to final embedding space
        return self.multi_modal_projector(selected_image_feature)


def load_gemma3_model(model_path):
    full_model = AutoModel.from_pretrained("google/gemma-3-4b-it")
    logger.info("Loaded full model: %s", full_model)

    vision_model = Gemma3VisualEmbeddingGenerator(full_model)
    logger.info("Created vision-only model: %s", vision_model)
    return vision_model
```
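To exercise the loader outside of Olive, a dummy forward pass can confirm the wrapper produces the embedding shape declared in the vision config's `io_config` later in this PR. A sketch (note that `model_path` is unused by the loader above):

```python
# Sketch: run the vision-only wrapper on a random image tensor and check
# the output shape against the io_config declared in the vision config.
import torch

vision_model = load_gemma3_model(model_path=None)  # model_path is ignored by the loader
vision_model.eval()

with torch.no_grad():
    pixel_values = torch.randn(1, 3, 896, 896)  # matches input_shapes [1, 3, 896, 896]
    image_features = vision_model(pixel_values)

print(image_features.shape)  # expected: torch.Size([1, 256, 2560])
```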
### `env_setup.sh` (new file)

```bash
#!/bin/bash
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------

# Install setuptools to build Olive from source
uv pip install setuptools

# Install example requirements (requires uv to be installed)
uv pip install -r ../requirements.txt

# Install Olive dependencies
uv pip install -r ../../../requirements.txt

# Disable CUDA extension build
export BUILD_CUDA_EXT=0

# Install AutoGPTQ from source
uv pip install --no-build-isolation git+https://github.com/PanQiWei/AutoGPTQ.git

# Install GptqModel from source
# Note: the commit hash corresponds to the commit that fixes the Gemma 3 memory leak issue. See README.md for additional details.
uv pip install --no-build-isolation git+https://github.com/ModelCloud/GPTQModel.git@558449bed3ef2653c36041650d30da6bbbca440d

# Install onnxruntime-qnn without installing onnxruntime
uv pip install -r https://raw.githubusercontent.com/microsoft/onnxruntime/refs/heads/main/requirements.txt
uv pip install -U --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn --no-deps
```
### `gemma3-4b-text-qnn-config.json` (new file)

```json
{
  "input_model": { "type": "HfModel", "model_path": "google/gemma-3-4b-it" },
  "systems": {
    "qnn_system": {
      "type": "PythonEnvironment",
      "python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
      "accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
    }
  },
  "data_configs": [
    {
      "name": "gemma_text_data_config",
      "user_script": "user_script.py",
      "load_dataset_config": { "type": "gemma_text_dataset", "model_id": "google/gemma-3-4b-it" }
    }
  ],
  "passes": {
    "q": { "type": "QuaRot", "device": "cpu" },
    "g": {
      "type": "GptqModel",
      "bits": 4,
      "sym": true,
      "group_size": -1,
      "lm_head": false,
      "device": "cuda",
      "data_config": "gemma_text_data_config"
    },
    "cs": { "type": "CaptureSplitInfo", "num_splits": 4, "unique_embeds_lm_head_splits": true },
    "mb": {
      "type": "ModelBuilder",
      "precision": "int4",
      "int4_block_size": 32,
      "int4_accuracy_level": 4,
      "int4_op_types_to_quantize": [ "MatMul", "Gather" ]
    },
    "mq": {
      "type": "MatMulNBitsToQDQ",
      "use_int4": true,
      "add_zero_point": true,
      "nodes_to_exclude": [ "/lm_head/MatMul_Q4" ],
      "save_as_external_data": true
    },
    "gs": {
      "type": "GraphSurgeries",
      "surgeries": [
        { "surgeon": "RemoveRopeMultiCache" },
        { "surgeon": "AttentionMaskToSequenceLengths" },
        { "surgeon": "SimplifiedLayerNormToL2Norm" }
      ],
      "save_as_external_data": true
    },
    "sq": {
      "type": "OnnxStaticQuantization",
      "data_config": "gemma_text_data_config",
      "activation_type": "uint16",
      "precision": "uint8",
      "calibration_providers": [ "CUDAExecutionProvider" ],
      "quant_preprocess": true,
      "op_types_to_exclude": [ "GatherBlockQuantized", "GroupQueryAttention", "MatMulNBits" ],
      "save_as_external_data": true
    },
    "sp": { "type": "SplitModel" },
    "st": { "type": "StaticLLM", "batch_size": 1, "context_length": 64 },
    "cb": {
      "type": "EPContextBinaryGenerator",
      "provider_options": {
        "htp_performance_mode": "burst",
        "htp_graph_finalization_optimization_mode": "3",
        "soc_model": "60"
      },
      "weight_sharing": true
    },
    "cp": { "type": "ComposeOnnxModels" }
  },
  "target": "qnn_system",
  "log_severity_level": 1,
  "output_dir": "models/gemma-3-4b-it-text",
  "cache_dir": "cache",
  "no_artifacts": true
}
```
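The data config above points at `user_script.py`, which is not part of this 13-commit view of the diff. For orientation only, a registered Olive dataset loader for the `gemma_text_dataset` type would follow roughly this shape; the preprocessing below is a hypothetical sketch, not the PR's actual implementation:

```python
# Hypothetical sketch of the user_script.py loader referenced by
# load_dataset_config; the calibration text here is a placeholder.
from olive.data.registry import Registry
from torch.utils.data import Dataset
from transformers import AutoTokenizer


class _CalibrationTextDataset(Dataset):
    def __init__(self, model_id, texts):
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.encodings = [tokenizer(t, return_tensors="pt") for t in texts]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, idx):
        return self.encodings[idx]


@Registry.register_dataset()
def gemma_text_dataset(model_id, **kwargs):
    # The real implementation reportedly uses the llava dataset (see commit 1f89241).
    return _CalibrationTextDataset(model_id, ["A placeholder calibration prompt."])
```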
### `gemma3-4b-vision-qnn-config.json` (new file)

```json
{
  "input_model": {
    "type": "PyTorchModel",
    "model_script": "custom_gemma3_4b_it_vision.py",
    "model_loader": "load_gemma3_model",
    "io_config": {
      "input_names": [ "pixel_values" ],
      "input_shapes": [ [ 1, 3, 896, 896 ] ],
      "input_types": [ "float32" ],
      "output_names": [ "image_features" ],
      "output_shapes": [ [ 1, 256, 2560 ] ]
    }
  },
  "systems": {
    "qnn_system": {
      "type": "PythonEnvironment",
      "python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
      "accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
    }
  },
  "data_configs": [
    {
      "name": "gemma_vision_data_config",
      "user_script": "user_script.py",
      "load_dataset_config": { "type": "gemma_vision_dataset", "model_id": "google/gemma-3-4b-it" }
    }
  ],
  "passes": {
    "conversion": { "type": "OnnxConversion", "target_opset": 20 },
    "surgery": { "type": "GraphSurgeries", "surgeries": [ { "surgeon": "MatMulAddToGemm" } ] },
    "quantization": {
      "type": "OnnxStaticQuantization",
      "quant_preprocess": true,
      "data_config": "gemma_vision_data_config",
      "activation_type": "uint16",
      "precision": "uint8",
      "calibrate_method": "MinMax"
    },
    "cb": {
      "type": "EPContextBinaryGenerator",
      "provider_options": {
        "htp_graph_finalization_optimization_mode": "3",
        "offload_graph_io_quantization": "0"
      }
    },
    "add_metadata": { "type": "AddOliveMetadata", "graph_name": "gemma-3-4b-it-vision" }
  },
  "target": "qnn_system",
  "log_severity_level": 1,
  "output_dir": "models/gemma-3-4b-it-vision",
  "cache_dir": "cache",
  "no_artifacts": true
}
```
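After the `conversion` pass runs, a quick onnxruntime check against the declared `io_config` can catch export problems early. A sketch, where the model path is a placeholder since Olive's on-disk output layout may differ:

```python
# Sketch: verify the converted (pre-quantization) vision model honors the
# declared io_config shapes.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "models/gemma-3-4b-it-vision/model.onnx",  # placeholder path
    providers=["CPUExecutionProvider"],
)
pixel_values = np.random.rand(1, 3, 896, 896).astype(np.float32)
(image_features,) = session.run(None, {"pixel_values": pixel_values})
print(image_features.shape)  # expected: (1, 256, 2560)
```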