Finetune Google FunctionGemma in NeMo Automodel #987
HuiyingLi started this conversation in Show and tell
FunctionGemma is a lightweight, 270M-parameter variant built on the Gemma 3 architecture with a function-calling chat format. It is intended to be fine-tuned for task-specific function calling, and its compact size makes it practical for edge or resource-constrained deployments.
Data
We use xLAM, a function-calling dataset containing user queries, tool schemas, and tool call traces, to showcase how to fine-tune FunctionGemma in NeMo Automodel. An example data entry is shown below:
```json
{
  "id": 123,
  "query": "Book me a table for two at 7pm in Seattle.",
  "tools": [
    {
      "name": "book_table",
      "description": "Book a restaurant table",
      "parameters": {
        "party_size": {"type": "int"},
        "time": {"type": "string"},
        "city": {"type": "string"}
      }
    }
  ],
  "answers": [
    {
      "name": "book_table",
      "arguments": "{\"party_size\":2,\"time\":\"19:00\",\"city\":\"Seattle\"}"
    }
  ]
}
```

We provide a helper, `make_xlam_dataset`, that converts each xLAM row into OpenAI-style tool schemas and tool calls.

Run SFT and PEFT
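To make the conversion concrete, here is a minimal, hand-written sketch of how an xLAM row maps to OpenAI-style tool schemas and tool-call messages. This is an illustration only; the actual `make_xlam_dataset` helper may differ in field names and details.

```python
import json

def xlam_to_openai(row):
    """Sketch: convert one xLAM row (query/tools/answers) into
    OpenAI-style tool schemas and chat messages with tool calls."""
    # Wrap each xLAM tool schema in the OpenAI "function" envelope.
    tools = [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t["description"],
                "parameters": {"type": "object", "properties": t["parameters"]},
            },
        }
        for t in row["tools"]
    ]
    # The user query becomes a user message; each answer becomes an
    # assistant tool call. xLAM stores arguments as a JSON string.
    messages = [
        {"role": "user", "content": row["query"]},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "type": "function",
                    "function": {
                        "name": a["name"],
                        "arguments": json.loads(a["arguments"]),
                    },
                }
                for a in row["answers"]
            ],
        },
    ]
    return {"tools": tools, "messages": messages}

row = {
    "id": 123,
    "query": "Book me a table for two at 7pm in Seattle.",
    "tools": [{"name": "book_table", "description": "Book a restaurant table",
               "parameters": {"party_size": {"type": "int"},
                              "time": {"type": "string"},
                              "city": {"type": "string"}}}],
    "answers": [{"name": "book_table",
                 "arguments": "{\"party_size\":2,\"time\":\"19:00\",\"city\":\"Seattle\"}"}],
}
converted = xlam_to_openai(row)
print(converted["messages"][1]["tool_calls"][0]["function"]["name"])  # book_table
```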
Use the following command to fine-tune FunctionGemma on xLAM, adjusting the number of GPUs to your setup.
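The exact launch command was not captured here; as a rough sketch, a multi-GPU launch typically goes through `torchrun` with a recipe script and YAML config. The script and config paths below are placeholders, not the real recipe paths; use the FunctionGemma/xLAM recipe shipped with your NeMo Automodel checkout.

```
# Hypothetical paths for illustration; substitute your recipe script and config.
torchrun --nproc-per-node=2 \
  examples/llm_finetune/finetune.py \
  --config examples/llm_finetune/functiongemma_xlam.yaml
```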
To apply LoRA (PEFT), uncomment the `peft` block in the recipe and tune the rank, alpha, and target modules; for more details, check out the SFT/PEFT guide! Example override:
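An illustrative `peft` override is shown below. The field names here are assumptions for the sake of example; mirror the keys used in your recipe's commented-out `peft` block and the SFT/PEFT guide.

```
# Illustrative only: check the recipe file for the exact schema.
peft:
  lora_rank: 8          # "rank": capacity of the low-rank adapters
  lora_alpha: 16        # scaling factor applied to the adapter update
  target_modules: ["q_proj", "k_proj", "v_proj", "o_proj"]  # "targets"
```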