Exporting Gemma3 4B? #75

Description

@shashwatsaini-aimonk

I have a fine-tuned Gemma3 4B model, and since Gemma is now supported, I am trying to run this:

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM
model_id = "merged_model/"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
model.save_pretrained('sql_onnx/')

But I get this error:
ValueError: Trying to export a gemma3 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma3 to be supported natively in the ONNX export.

I am unsure what a custom ONNX config is, or whether this should be supported out of the box. Thanks for your help!

Metadata

Labels

model-addition (Requires a PR adding support for the model)
