Description
I have a fine-tuned gemma3 4B model, and seeing that Gemma is now supported, I am trying to run this:
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "merged_model/"  # local directory holding the fine-tuned gemma3 4B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True should convert the checkpoint to ONNX on the fly
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
model.save_pretrained("sql_onnx/")
But I get this error:
ValueError: Trying to export a gemma3 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma3 to be supported natively in the ONNX export.
I am unsure what a custom ONNX config is, or whether gemma3 should be supported out of the box.
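From the linked docs, my best guess is that a "custom config" means building an OnnxConfig myself and passing it to main_export via custom_onnx_configs. Below is a minimal sketch of what I would try, assuming gemma3 is close enough to gemma that GemmaOnnxConfig can be reused; I have not verified this actually exports correctly:

from transformers import AutoConfig
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import GemmaOnnxConfig

model_id = "merged_model/"
config = AutoConfig.from_pretrained(model_id)

# Assumption: reuse the gemma ONNX export config for the gemma3 architecture.
# Any architectural differences between gemma and gemma3 could break this.
onnx_config = GemmaOnnxConfig(config, task="text-generation", use_past=True)

main_export(
    model_id,
    output="sql_onnx/",
    task="text-generation-with-past",
    custom_onnx_configs={"model": onnx_config},
)

Is something along these lines the intended workaround, or is native gemma3 support planned? Thanks for your help!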