Commits (26)
d494c82 Initial commit (qti-kromero, Aug 13, 2025)
ddf3ea8 Add README and start config (qti-kromero, Aug 13, 2025)
1f54074 QuaRot passing, working on GptqQuantizer (qti-kromero, Aug 14, 2025)
6cae95f Work on dataset integration (qti-kromero, Aug 15, 2025)
2d0872e Data processing works (qti-kromero, Aug 15, 2025)
6a6f67d Fix lint issues and cleanup (qti-kromero, Aug 15, 2025)
cd24ddf Adding vision resources (qti-kromero, Aug 18, 2025)
636e982 Add Gemma3 vision configurations (qti-kromero, Aug 19, 2025)
b4ea7a3 Fix linting error (qti-kromero, Aug 19, 2025)
1f69af3 Vision model onnx conversion working (qti-kromero, Aug 19, 2025)
aed20ec Enable quant on text model (qti-kromero, Aug 20, 2025)
ba0633c Improve README (qti-kromero, Aug 26, 2025)
5ad910d Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero, Aug 28, 2025)
acbdfdc Add files from Prudvhi (qti-kromero, Aug 28, 2025)
f7178ae Updates (qti-kromero, Sep 2, 2025)
bd70ff4 Updates (qti-kromero, Sep 3, 2025)
c962cee Add olive requirements file (prudhvi-qti, Sep 4, 2025)
360d9c2 update (qti-kromero, Sep 4, 2025)
5fcda5c Update Olive scripts for gemma3 (prudhvi-qti, Sep 4, 2025)
14018ee Update few python packages (prudhvi-qti, Sep 5, 2025)
1f89241 Use the same llava dataset for text model as well (prudhvi-qti, Sep 8, 2025)
7d4ced8 Minor cleanup (qti-kromero, Sep 9, 2025)
a0bd703 Add system requirements (prudhvi-qti, Sep 9, 2025)
f712bdc Merge remote-tracking branch 'origin/main' into dev/qti-kromero/gemma3 (qti-kromero, Sep 18, 2025)
f685073 Remove examples (qti-kromero, Sep 18, 2025)
5dff155 Fix review comments (qti-kromero, Sep 18, 2025)
122 changes: 122 additions & 0 deletions examples/gemma3/qnn/README.md
@@ -0,0 +1,122 @@
# Gemma-3-4B Model Optimization

This example demonstrates optimization of the [Google Gemma-3-4B](https://huggingface.co/google/gemma-3-4b-it) model using **post-training quantization (PTQ)** techniques for QNN (Qualcomm Neural Network) execution. The environment setup is based heavily on the [PTQ tutorial for Phi-3.5](https://github.com/CodeLinaro/Olive/blob/main/examples/phi3_5/README.md).

## File Overview

This example contains the following key files:

- **`env_setup.sh`** - Automated environment setup script (Linux only)
- **`gemma3-4b-text-qnn-config.json`** - Olive configuration for optimizing the text component
- **`gemma3-4b-vision-qnn-config.json`** - Olive configuration for optimizing the vision component
- **`user_script.py`** - Dataset handling and preprocessing utilities (see the registration sketch after this list)
- **`custom_gemma3_4b_it_vision.py`** - Vision model loader for the optimization pipeline
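
Both configs point their `load_dataset_config` at `user_script.py` with types `gemma_text_dataset` and `gemma_vision_dataset`. The script itself is not shown here; the following is only a rough, hypothetical sketch of the registration pattern Olive resolves those type names against, with dummy data standing in for the real COCO/llava preprocessing:

```python
# Hypothetical sketch only: the PR's actual user_script.py preprocesses
# the COCO/llava data described in this README.
import torch
from torch.utils.data import Dataset

from olive.data.registry import Registry


class _DummyVisionDataset(Dataset):
    """Yields (inputs, label) pairs shaped like the vision io_config."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # One sample: pixel_values of shape [3, 896, 896], dummy label 0
        return {"pixel_values": torch.zeros(3, 896, 896)}, 0


@Registry.register_dataset("gemma_vision_dataset")
def gemma_vision_dataset(model_id: str, **kwargs):
    # model_id arrives from load_dataset_config in the JSON config
    return _DummyVisionDataset()
```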

## Prerequisites

### System Requirements
- **Operating System**: Linux (the automated setup script is Linux-only)
- **Python**: 3.10
- **Package Manager**: [uv](https://docs.astral.sh/uv/getting-started/installation/#installation-methods)
- **GPU**: CUDA-capable GPU (the text configuration runs GPTQ and calibration on CUDA)
- **Storage**: ~13GB for the COCO train2017 dataset (downloaded automatically)

### Dependencies Installed by Setup Script
The `env_setup.sh` script installs the following components:
- setuptools (for building Olive from source)
- Olive requirements and dependencies
- AutoGPTQ (from source)
- GPTQModel (specific commit: `558449bed3ef2653c36041650d30da6bbbca440d`)
- onnxruntime-qnn (pre-release version)

## Setup Instructions

### Automated Setup (Recommended)
```bash
source env_setup.sh
```

### Manual Setup (Alternative)
If you prefer to set up manually or need to troubleshoot:

1. Install setuptools:
```bash
uv pip install setuptools
```

2. Install requirements:
```bash
uv pip install -r ../requirements.txt
uv pip install -r ../../../requirements.txt
```

3. Install AutoGPTQ from source:
```bash
export BUILD_CUDA_EXT=0
uv pip install --no-build-isolation git+https://github.com/PanQiWei/AutoGPTQ.git
```

4. Install GPTQModel with Gemma3 fix:
```bash
uv pip install --no-build-isolation git+https://github.com/ModelCloud/GPTQModel.git@558449bed3ef2653c36041650d30da6bbbca440d
```

5. Install onnxruntime-qnn:
```bash
uv pip install -r https://raw.githubusercontent.com/microsoft/onnxruntime/refs/heads/main/requirements.txt
uv pip install -U --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn --no-deps
```

> **Important:** The setup uses a specific commit hash for GPTQModel (`558449bed3ef2653c36041650d30da6bbbca440d`) to address a [memory leak issue](https://github.com/ModelCloud/GPTQModel/commit/558449bed3ef2653c36041650d30da6bbbca440d) with Gemma3 models.

## Optimization Process

Gemma-3-4B is a multi-modal model composed of vision and text components, so the Olive strategy is to optimize each component separately and then configure the resulting models to work together at the onnxruntime-genai stage.

### Configuration Differences

**Text Configuration (`gemma3-4b-text-qnn-config.json`)**:
- Uses the HuggingFace model directly (`google/gemma-3-4b-it`)
- Applies the full optimization pipeline: QuaRot → GptqModel → CaptureSplitInfo → ModelBuilder → MatMulNBitsToQDQ → GraphSurgeries → OnnxStaticQuantization → SplitModel → StaticLLM → EPContextBinaryGenerator → ComposeOnnxModels
- Outputs to: `models/gemma-3-4b-it-text/`

**Vision Configuration (`gemma3-4b-vision-qnn-config.json`)**:
- Uses a custom PyTorch model loader (`custom_gemma3_4b_it_vision.py`); a loader sanity check follows this list
- Simpler pipeline: OnnxConversion → GraphSurgeries → OnnxStaticQuantization → EPContextBinaryGenerator → AddOliveMetadata
- Outputs to: `models/gemma-3-4b-it-vision/`
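
To sanity-check the custom loader against the `io_config` in the vision configuration before launching a run, a minimal sketch (this loads the full checkpoint, so expect significant RAM use):

```python
# Verify the vision wrapper emits [1, 256, 2560], matching output_shapes
# in gemma3-4b-vision-qnn-config.json.
import torch

from custom_gemma3_4b_it_vision import load_gemma3_model

model = load_gemma3_model(None).eval()  # model_path is unused by the loader
with torch.no_grad():
    features = model(torch.zeros(1, 3, 896, 896))  # matches input_shapes
print(features.shape)  # expected: torch.Size([1, 256, 2560])
```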

### Running Optimization

Both configurations define a `qnn_system` target; before running, update `python_environment_path` in each JSON file to point to your own Python environment's `bin` directory. Then execute the following commands to produce optimized binaries for each component:

```bash
olive run --config gemma3-4b-text-qnn-config.json
```

```bash
olive run --config gemma3-4b-vision-qnn-config.json
```

## Expected Outputs

After successful optimization, you will find:

- **Text model outputs**: `models/gemma-3-4b-it-text/`
- **Vision model outputs**: `models/gemma-3-4b-it-vision/`
- **Cache directory**: `cache/` (intermediate files and downloaded datasets)
- **Dataset**: `.cache/train2017/` (COCO train2017 images, ~13GB)

Both configurations use `"no_artifacts": true`, meaning only the final optimized models are retained.
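
As a quick completeness check after both runs, the output directories named in the configs should contain the final models:

```bash
ls models/gemma-3-4b-it-text models/gemma-3-4b-it-vision
```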

## Troubleshooting

### Common Issues

**Insufficient Storage**: The COCO train2017 dataset requires ~13GB of storage and is downloaded automatically to `.cache/train2017/`.

**Memory Requirements**: The optimization process, particularly for the text model with its comprehensive pipeline, requires substantial memory.

**QNN Provider**: Ensure the QNNExecutionProvider is properly installed and configured in your environment.
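
A quick way to confirm the provider is visible from Python (assumes `onnxruntime-qnn` is installed in the active environment):

```python
# List the execution providers the installed onnxruntime build exposes;
# "QNNExecutionProvider" should appear in the output.
import onnxruntime as ort

print(ort.get_available_providers())
```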

**Platform Limitation**: The current setup script is designed for Linux only. Windows/macOS users will need to adapt the manual setup steps.

**Dataset Download**: If the COCO dataset download fails, check your internet connection and available storage. The script uses `wget` which must be available on your system.
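
A quick pre-flight check before running the setup:

```bash
# Confirm wget exists and show free disk space for the ~13GB dataset.
command -v wget >/dev/null 2>&1 || echo "wget not found; install it with your package manager"
df -h .
```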
36 changes: 36 additions & 0 deletions examples/gemma3/qnn/custom_gemma3_4b_it_vision.py
@@ -0,0 +1,36 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------


import logging

import torch
from transformers import AutoModel

logger = logging.getLogger(__name__)


class Gemma3VisualEmbeddingGenerator(torch.nn.Module):
    def __init__(self, full_model):
        super().__init__()
        # Extract only the vision components
        self.vision_tower = full_model.vision_tower
        self.multi_modal_projector = full_model.multi_modal_projector

    def forward(self, pixel_values):
        # Process images through the vision tower
        image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)
        selected_image_feature = image_outputs.last_hidden_state
        # Project to the final embedding space
        return self.multi_modal_projector(selected_image_feature)


def load_gemma3_model(model_path):
    # model_path is unused: the Olive config supplies no model_path, so the
    # Hugging Face model id is hardcoded here.
    full_model = AutoModel.from_pretrained("google/gemma-3-4b-it")
    logger.info("Loaded full model: %s", full_model)

    vision_model = Gemma3VisualEmbeddingGenerator(full_model)
    logger.info("Created vision-only model: %s", vision_model)
    return vision_model
28 changes: 28 additions & 0 deletions examples/gemma3/qnn/env_setup.sh
@@ -0,0 +1,28 @@
#!/bin/bash
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# --------------------------------------------------------------------------

# Install setuptools to build Olive from source
uv pip install setuptools

# Install the example requirements (requires uv)
uv pip install -r ../requirements.txt

# Install Olive dependencies
uv pip install -r ../../../requirements.txt

# Disable CUDA extension build
export BUILD_CUDA_EXT=0

# Install AutoGPTQ from source
uv pip install --no-build-isolation git+https://github.com/PanQiWei/AutoGPTQ.git

# Install GptqModel from source
# Note: the pinned commit fixes a Gemma 3 memory leak issue. See README.md for additional details.
uv pip install --no-build-isolation git+https://github.com/ModelCloud/GPTQModel.git@558449bed3ef2653c36041650d30da6bbbca440d

# Install onnxruntime-qnn without installing onnxruntime
uv pip install -r https://raw.githubusercontent.com/microsoft/onnxruntime/refs/heads/main/requirements.txt
uv pip install -U --pre --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple onnxruntime-qnn --no-deps
80 changes: 80 additions & 0 deletions examples/gemma3/qnn/gemma3-4b-text-qnn-config.json
@@ -0,0 +1,80 @@
{
"input_model": { "type": "HfModel", "model_path": "google/gemma-3-4b-it" },
"systems": {
"qnn_system": {
"type": "PythonEnvironment",
"python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
"accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
}
},
"data_configs": [
{
"name": "gemma_text_data_config",
"user_script": "user_script.py",
"load_dataset_config": { "type": "gemma_text_dataset", "model_id": "google/gemma-3-4b-it" }
}
],
"passes": {
"q": { "type": "QuaRot", "device": "cpu" },
"g": {
"type": "GptqModel",
"bits": 4,
"sym": true,
"group_size": -1,
"lm_head": false,
"device": "cuda",
"data_config": "gemma_text_data_config"
},
"cs": { "type": "CaptureSplitInfo", "num_splits": 4, "unique_embeds_lm_head_splits": true },
"mb": {
"type": "ModelBuilder",
"precision": "int4",
"int4_block_size": 32,
"int4_accuracy_level": 4,
"int4_op_types_to_quantize": [ "MatMul", "Gather" ]
},
"mq": {
"type": "MatMulNBitsToQDQ",
"use_int4": true,
"add_zero_point": true,
"nodes_to_exclude": [ "/lm_head/MatMul_Q4" ],
"save_as_external_data": true
},
"gs": {
"type": "GraphSurgeries",
"surgeries": [
{ "surgeon": "RemoveRopeMultiCache" },
{ "surgeon": "AttentionMaskToSequenceLengths" },
{ "surgeon": "SimplifiedLayerNormToL2Norm" }
],
"save_as_external_data": true
},
"sq": {
"type": "OnnxStaticQuantization",
"data_config": "gemma_text_data_config",
"activation_type": "uint16",
"precision": "uint8",
"calibration_providers": [ "CUDAExecutionProvider" ],
"quant_preprocess": true,
"op_types_to_exclude": [ "GatherBlockQuantized", "GroupQueryAttention", "MatMulNBits" ],
"save_as_external_data": true
},
"sp": { "type": "SplitModel" },
"st": { "type": "StaticLLM", "batch_size": 1, "context_length": 64 },
"cb": {
"type": "EPContextBinaryGenerator",
"provider_options": {
"htp_performance_mode": "burst",
"htp_graph_finalization_optimization_mode": "3",
"soc_model": "60"
},
"weight_sharing": true
},
"cp": { "type": "ComposeOnnxModels" }
},
"target": "qnn_system",
"log_severity_level": 1,
"output_dir": "models/gemma-3-4b-it-text",
"cache_dir": "cache",
"no_artifacts": true
}
53 changes: 53 additions & 0 deletions examples/gemma3/qnn/gemma3-4b-vision-qnn-config.json
@@ -0,0 +1,53 @@
{
"input_model": {
"type": "PyTorchModel",
"model_script": "custom_gemma3_4b_it_vision.py",
"model_loader": "load_gemma3_model",
"io_config": {
"input_names": [ "pixel_values" ],
"input_shapes": [ [ 1, 3, 896, 896 ] ],
"input_types": [ "float32" ],
"output_names": [ "image_features" ],
"output_shapes": [ [ 1, 256, 2560 ] ]
}
},
"systems": {
"qnn_system": {
"type": "PythonEnvironment",
"python_environment_path": "/local/mnt2/workspace/kromero/olive/olive-venv/bin",
"accelerators": [ { "execution_providers": [ "QNNExecutionProvider" ] } ]
}
},
"data_configs": [
{
"name": "gemma_vision_data_config",
"user_script": "user_script.py",
"load_dataset_config": { "type": "gemma_vision_dataset", "model_id": "google/gemma-3-4b-it" }
}
],
"passes": {
"conversion": { "type": "OnnxConversion", "target_opset": 20 },
"surgery": { "type": "GraphSurgeries", "surgeries": [ { "surgeon": "MatMulAddToGemm" } ] },
"quantization": {
"type": "OnnxStaticQuantization",
"quant_preprocess": true,
"data_config": "gemma_vision_data_config",
"activation_type": "uint16",
"precision": "uint8",
"calibrate_method": "MinMax"
},
"cb": {
"type": "EPContextBinaryGenerator",
"provider_options": {
"htp_graph_finalization_optimization_mode": "3",
"offload_graph_io_quantization": "0"
}
},
"add_metadata": { "type": "AddOliveMetadata", "graph_name": "gemma-3-4b-it-vision" }
},
"target": "qnn_system",
"log_severity_level": 1,
"output_dir": "models/gemma-3-4b-it-vision",
"cache_dir": "cache",
"no_artifacts": true
}