Install the required Python packages:

```bash
python -m pip install -r requirements.txt
```

Then run the following command to export the model:

```bash
python -m olive capture-onnx-graph -m google/gemma-3-270m-it --use_model_builder -o output_model
```

The exported ONNX model is saved in the `output_model` folder.
To run the exported model, set up the latest version of ONNX Runtime GenAI. A sample chat app, `model-chat.py`, is available in the onnxruntime-genai GitHub repository.
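The chat app drives a standard ONNX Runtime GenAI generation loop. As an illustrative sketch only (this is not the actual `model-chat.py`; `format_prompt` and `run_chat` are hypothetical helpers, and the Gemma chat-template strings are assumptions), a minimal version might look like:

```python
import sys

def format_prompt(user_message: str) -> str:
    # Wrap a user message in the Gemma chat template (assumed format).
    return f"<start_of_turn>user\n{user_message}<end_of_turn>\n<start_of_turn>model\n"

def run_chat(model_dir: str) -> None:
    # Minimal generation loop; assumes onnxruntime-genai is installed and
    # model_dir points at the exported ONNX GenAI model folder.
    import onnxruntime_genai as og  # deferred import so the helper above works without it
    model = og.Model(model_dir)
    tokenizer = og.Tokenizer(model)
    params = og.GeneratorParams(model)
    params.set_search_options(max_length=256)
    generator = og.Generator(model, params)
    generator.append_tokens(tokenizer.encode(format_prompt("Hello, who are you?")))
    # Generate one token at a time until the model signals completion.
    while not generator.is_done():
        generator.generate_next_token()
    print(tokenizer.decode(generator.get_sequence(0)))

if __name__ == "__main__" and len(sys.argv) > 1:
    run_chat(sys.argv[1])
```

For the full-featured version (streaming output, sampling options, multi-turn history), use `model-chat.py` from the repository instead.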
Alternatively, you can export the model by running Olive with a workflow configuration file. First install the required packages:

```bash
python -m pip install -r requirements.txt
```

Then export the model using Olive with CPUExecutionProvider at FP32 precision:

```bash
olive run --config gemma-3-1b-it_model_builder_cpu_fp32.json
```
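The contents of the JSON config live in the repository, but an Olive workflow config for this kind of export typically pairs a Hugging Face input model with a ModelBuilder pass. A rough sketch, where every field value is an illustrative assumption rather than the actual file:

```json
{
  "input_model": {
    "type": "HfModel",
    "model_path": "google/gemma-3-1b-it"
  },
  "systems": {
    "local_system": {
      "type": "LocalSystem",
      "accelerators": [
        { "device": "cpu", "execution_providers": ["CPUExecutionProvider"] }
      ]
    }
  },
  "passes": {
    "builder": {
      "type": "ModelBuilder",
      "precision": "fp32"
    }
  },
  "output_dir": "output_model"
}
```

Changing `precision` (e.g. to `int4`) or the accelerator entry is the usual way such configs target other precisions or execution providers.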