Gemma3-4b QNN example fixes #2106

Status: Open. qti-kromero wants to merge 26 commits into microsoft:main from CodeLinaro:dev/qti-kromero/gemma3.
Commits (26):
- d494c82 Initial commit (qti-kromero)
- ddf3ea8 Add README and start config (qti-kromero)
- 1f54074 QuaRot passing, working on GptqQuantizer (qti-kromero)
- 6cae95f Work on dataset integration (qti-kromero)
- 2d0872e Data processing works (qti-kromero)
- 6a6f67d Fix lint issues and cleanup (qti-kromero)
- cd24ddf Adding vision resources (qti-kromero)
- 636e982 Add Gemma3 vision configurations (qti-kromero)
- b4ea7a3 Fix linting error (qti-kromero)
- 1f69af3 Vision model onnx conversion working (qti-kromero)
- aed20ec Enable quant on text model (qti-kromero)
- ba0633c Improve README (qti-kromero)
- 5ad910d Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero)
- acbdfdc Add files from Prudvhi (qti-kromero)
- f7178ae Updates (qti-kromero)
- bd70ff4 Updates (qti-kromero)
- c962cee Add olive requirements file (prudhvi-qti)
- 360d9c2 update (qti-kromero)
- 5fcda5c Update Olive scripts for gemma3 (prudhvi-qti)
- 14018ee Update few python packages (prudhvi-qti)
- 1f89241 Use the same llava dataset for text model as well (prudhvi-qti)
- 7d4ced8 Minor cleanup (qti-kromero)
- a0bd703 Add system requirements (prudhvi-qti)
- f712bdc Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero)
- f685073 Remove examples (qti-kromero)
- 5dff155 Fix review comments (qti-kromero)
README (new file, 29 lines):

# Gemma-3-4B Model Optimization

This repository demonstrates the optimization of the [Google Gemma-3-4B](https://huggingface.co/google/gemma-3-4b-it) model using **post-training quantization (PTQ)** techniques. The optimization process uses an environment based closely on the [PTQ tutorial for Phi-3.5](https://github.com/CodeLinaro/Olive/blob/main/examples/phi3_5/README.md).

## Automated Setup (Linux Only)

Requirements:
* Python 3.10
* uv - used throughout the setup scripts; please follow the [publicly available installation instructions](https://docs.astral.sh/uv/getting-started/installation/#installation-methods)

This repository contains an automated setup script for Linux that automates many of the steps listed in the tutorial above:

```bash
source env_setup.sh
```

## Optimization Process

Since Gemma-3-4B is a multi-modal model composed of both vision and text components, the strategy for optimizing it through Olive is to operate on the constituent models separately before configuring them to work in concert at the onnxruntime-genai stage.

Thus, the following commands should be used to separately produce context binaries for the text and vision portions of the model, respectively:

```bash
olive run --config gemma3-4b-text-qnn-config.json
```

```bash
olive run --config gemma3-4b-vision-qnn-config.json
```
custom_gemma3_4b_it_vision.py (new file, 20 lines; the vision model loader script referenced by the vision config):

```python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------
import torch
from transformers import AutoModel


def load_gemma3_model(model_path):
    return AutoModel.from_pretrained("google/gemma-3-4b-it")


def get_dummy_inputs(model_handler):
    return {
        "input_ids": torch.full((1, 256), 262144, dtype=torch.long),  # Image token ID
        "pixel_values": torch.randn(1, 3, 896, 896, dtype=torch.float32),
        "attention_mask": torch.ones((1, 256), dtype=torch.long),
    }
```
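The dummy-input shapes above must stay in sync with the `io_config` declared in `gemma3-4b-vision-qnn-config.json`, and a silent drift between the two files is an easy mistake. A minimal, dependency-free sketch of a consistency check (the shape dicts below are transcribed by hand from the two files; this helper is illustrative and not part of the PR):

```python
# Hand-transcribed from get_dummy_inputs() in custom_gemma3_4b_it_vision.py
dummy_shapes = {
    "input_ids": (1, 256),
    "pixel_values": (1, 3, 896, 896),
    "attention_mask": (1, 256),
}

# Hand-transcribed from io_config in gemma3-4b-vision-qnn-config.json
io_config_shapes = {
    "input_ids": (1, 256),
    "pixel_values": (1, 3, 896, 896),
    "attention_mask": (1, 256),
}

# Any input whose declared shape disagrees between the two files is flagged.
mismatches = {
    name
    for name in io_config_shapes
    if dummy_shapes.get(name) != io_config_shapes[name]
}
assert not mismatches, f"dummy inputs disagree with io_config: {mismatches}"
```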
examples/gemma3/qnn/env_setup.sh (new file, 23 lines; CI reports a check failure on line 1 of this script):

```bash
# Installing setuptools to build Olive from source
uv pip install setuptools

# Requires installation of uv
uv pip install -r ../requirements.txt

# Requires installation of Olive dependencies
uv pip install -r ../../../requirements.txt

# Disable CUDA extension build
export BUILD_CUDA_EXT=0

# Install AutoGPTQ from source
uv pip install --no-build-isolation git+https://github.com/PanQiWei/AutoGPTQ.git

# Install GptqModel from source
uv pip install --no-build-isolation git+https://github.com/ModelCloud/GPTQModel.git@5d2911a4b2a709afb0941d53c3882d0cd80b9649

# Install onnxruntime-qnn without installing onnxruntime
# Note: Installing both at the same time may cause conflicts
uv pip install -r https://raw.githubusercontent.com/microsoft/onnxruntime/refs/heads/main/requirements.txt
uv pip install -U --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn --no-deps
```
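Since the setup script warns that installing `onnxruntime` and `onnxruntime-qnn` together may cause conflicts, it can be useful to check the environment before running Olive. A hedged sketch of such a check using only the standard library (this helper is not part of the PR):

```python
# Detect whether multiple ONNX Runtime variants are installed side by side.
from importlib import metadata

ort_variants = ["onnxruntime", "onnxruntime-qnn"]
installed = []
for name in ort_variants:
    try:
        metadata.version(name)        # raises if the package is absent
        installed.append(name)
    except metadata.PackageNotFoundError:
        pass

if len(installed) > 1:
    print(f"warning: multiple ONNX Runtime variants installed: {installed}")
```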
gemma3-4b-text-qnn-config.json (new file, 80 lines):

```json
{
  "input_model": { "type": "HfModel", "model_path": "google/gemma-3-4b-it" },
  "systems": {
    "qnn_system": {
      "type": "PythonEnvironment",
      "python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
      "accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
    }
  },
  "data_configs": [
    {
      "name": "gemma_text_data_config",
      "user_script": "user_script.py",
      "load_dataset_config": { "type": "gemma_text_dataset", "model_id": "google/gemma-3-4b-it" }
    }
  ],
  "passes": {
    "q": { "type": "QuaRot" },
    "g": {
      "type": "GptqModel",
      "bits": 4,
      "sym": true,
      "group_size": -1,
      "lm_head": false,
      "device": "cuda",
      "data_config": "gemma_text_data_config"
    },
    "cs": { "type": "CaptureSplitInfo", "num_splits": 4, "unique_embeds_lm_head_splits": true },
    "mb": {
      "type": "ModelBuilder",
      "precision": "int4",
      "int4_block_size": 32,
      "int4_accuracy_level": 4,
      "int4_op_types_to_quantize": [ "MatMul", "Gather" ]
    },
    "mq": {
      "type": "MatMulNBitsToQDQ",
      "use_int4": true,
      "add_zero_point": true,
      "nodes_to_exclude": [ "/lm_head/MatMul_Q4" ],
      "save_as_external_data": true
    },
    "gs": {
      "type": "GraphSurgeries",
      "surgeries": [
        { "surgeon": "RemoveRopeMultiCache" },
        { "surgeon": "AttentionMaskToSequenceLengths" },
        { "surgeon": "SimplifiedLayerNormToL2Norm" }
      ],
      "save_as_external_data": true
    },
    "sq": {
      "type": "OnnxStaticQuantization",
      "data_config": "gemma_text_data_config",
      "activation_type": "uint16",
      "precision": "uint8",
      "calibration_providers": [ "CUDAExecutionProvider" ],
      "quant_preprocess": true,
      "op_types_to_exclude": [ "GatherBlockQuantized", "GroupQueryAttention", "MatMulNBits" ],
      "save_as_external_data": true
    },
    "sp": { "type": "SplitModel" },
    "st": { "type": "StaticLLM", "batch_size": 1, "context_length": 64 },
    "cb": {
      "type": "EPContextBinaryGenerator",
      "provider_options": {
        "htp_performance_mode": "burst",
        "htp_graph_finalization_optimization_mode": "3",
        "soc_model": "60"
      },
      "weight_sharing": true
    },
    "cp": { "type": "ComposeOnnxModels" }
  },
  "target": "qnn_system",
  "log_severity_level": 1,
  "output_dir": "models/gemma-3-4b-it-text",
  "cache_dir": "cache",
  "no_artifacts": true
}
```
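The `g` pass in the text config applies GptqModel with `bits: 4`, `sym: true`, and `group_size: -1`, i.e. symmetric 4-bit weights with a single scale per output channel. As a rough illustration of that weight format only (GPTQ itself additionally error-compensates columns using second-order information; this sketch is not the GptqModel implementation):

```python
import numpy as np

def quantize_per_channel_sym4(w: np.ndarray):
    """Symmetric 4-bit quantization with one scale per output channel
    (the effect of sym=true, bits=4, group_size=-1). Levels lie in [-8, 7]."""
    max_abs = np.abs(w).max(axis=1, keepdims=True)   # per-row amplitude
    scale = max_abs / 7.0                            # map |w|max onto level 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)  # (out_channels, in_channels)
q, scale = quantize_per_channel_sym4(w)
max_err = float(np.abs(dequantize(q, scale) - w).max())
# Round-to-nearest keeps the error within half a quantization step per row.
assert -8 <= q.min() and q.max() <= 7
assert max_err <= 0.5 * float(scale.max()) + 1e-6
```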
gemma3-4b-vision-qnn-config.json (new file, 47 lines):

```json
{
  "input_model": {
    "type": "PyTorchModel",
    "model_script": "custom_gemma3_4b_it_vision.py",
    "model_loader": "load_gemma3_model",
    "dummy_inputs_func": "get_dummy_inputs",
    "io_config": {
      "input_names": [ "input_ids", "pixel_values", "attention_mask" ],
      "input_shapes": [ [ 1, 256 ], [ 1, 3, 896, 896 ], [ 1, 256 ] ],
      "input_types": [ "int64", "float32", "int64" ],
      "output_names": [ "last_hidden_state" ],
      "output_shapes": [ [ 1, 256, 2560 ] ]
    }
  },
  "systems": {
    "qnn_system": {
      "type": "PythonEnvironment",
      "python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
      "accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
    }
  },
  "data_configs": [
    {
      "name": "gemma_vision_data_config",
      "user_script": "user_script.py",
      "load_dataset_config": { "type": "gemma_vision_dataset", "model_id": "google/gemma-3-4b-it" }
    }
  ],
  "passes": {
    "conversion": { "type": "OnnxConversion", "target_opset": 17 },
    "quantization": {
      "type": "OnnxStaticQuantization",
      "quant_preprocess": true,
      "data_config": "gemma_vision_data_config",
      "op_types_to_quantize": [ "MatMul", "LayerNormalization", "Gemm", "Sigmoid", "Gelu" ],
      "activation_type": "uint16",
      "precision": "uint8",
      "calibrate_method": "MinMax"
    },
    "add_metadata": { "type": "AddOliveMetadata", "graph_name": "gemma-3-4b-it-vision" }
  },
  "target": "qnn_system",
  "log_severity_level": 1,
  "output_dir": "models/gemma-3-4b-it-vision",
  "cache_dir": "cache",
  "no_artifacts": true
}
```
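The vision `quantization` pass uses `calibrate_method: MinMax` with `activation_type: uint16`: the calibrator observes activation min/max over the calibration set and derives one asymmetric scale and zero-point per tensor. A hedged sketch of that derivation (`minmax_qparams` is a made-up helper for illustration, not the onnxruntime quantizer API):

```python
import numpy as np

def minmax_qparams(x: np.ndarray, qmin: int = 0, qmax: int = 65535):
    """Derive an asymmetric scale/zero-point for uint16 from observed min/max."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)          # range must include 0.0 exactly
    scale = (hi - lo) / (qmax - qmin) or 1.0     # guard a degenerate all-zero range
    zero_point = int(round(qmin - lo / scale))
    return scale, zero_point

# Pretend these are activation values observed during MinMax calibration.
x = np.linspace(-2.0, 6.0, 1000).astype(np.float32)
scale, zp = minmax_qparams(x)

q = np.clip(np.round(x / scale) + zp, 0, 65535).astype(np.uint16)
x_hat = (q.astype(np.float32) - zp) * scale
# uint16 leaves plenty of headroom: reconstruction error stays within one step.
assert 0 <= zp <= 65535
assert float(np.abs(x_hat - x).max()) <= scale
```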