
Gemma 3 support #49

@ghost

Description

System Info

Windows 11 x86-64

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

conda create -n gemma-convert python=3.10
conda activate gemma-convert
pip install git+https://github.com/huggingface/accelerate.git
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/huggingface/optimum.git
optimum-cli export onnx --model gemma-3-transformers-gemma-3-27b-it-v1 --task text-generation gemma-onnx/

result:

Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 12/12 [02:57<00:00, 14.79s/it]
Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
Traceback (most recent call last):
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\codrut\miniconda3\envs\gemma-convert\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
service.run()
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\site-packages\optimum\commands\export\onnx.py", line 276, in run
main_export(
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\site-packages\optimum\exporters\onnx\__main__.py", line 414, in main_export
onnx_export_from_model(
File "C:\Users\codrut\miniconda3\envs\gemma-convert\lib\site-packages\optimum\exporters\onnx\convert.py", line 1039, in onnx_export_from_model
raise ValueError(
ValueError: Trying to export a gemma3 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma3 to be supported natively in the ONNX export.
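As a possible workaround while gemma3 is unsupported natively, the error message points at passing custom_onnx_configs to main_export. Below is an untested sketch of that path: reusing GemmaOnnxConfig for the gemma3 text decoder is purely an assumption (the classes and the "model" dict key exist in optimum, but whether GemmaOnnxConfig accepts a gemma3 config is unverified), so this is illustration only, not a confirmed fix.

```python
# Untested sketch: custom ONNX export via custom_onnx_configs, per the
# error message's suggestion. ASSUMPTION: gemma3's text decoder is close
# enough to gemma that GemmaOnnxConfig works for it -- not verified.
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import GemmaOnnxConfig
from transformers import AutoConfig

model_id = "gemma-3-transformers-gemma-3-27b-it-v1"  # local checkpoint dir from the repro
config = AutoConfig.from_pretrained(model_id)

# For decoder-only models the exporter keys the config dict by "model".
custom_onnx_configs = {
    "model": GemmaOnnxConfig(config=config, task="text-generation"),
}

main_export(
    model_id,
    output="gemma-onnx/",
    task="text-generation",
    custom_onnx_configs=custom_onnx_configs,
)
```

If GemmaOnnxConfig rejects the gemma3 config (e.g. because gemma3 wraps the text model in a multimodal config), a custom OnnxConfig subclass would be needed instead, as described in the custom-export guide linked in the error.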

Expected behavior

Conversion to ONNX succeeds.

Metadata


Labels: bug (Something isn't working)
