Commit 872795a (1 parent: 64c1794)

File tree

1 file changed: +6 -6 lines changed


docs/functiongemma/finetuning-with-functiongemma.ipynb

Lines changed: 6 additions & 6 deletions
@@ -80,9 +80,9 @@
 "source": [
 "This guide demonstrates how to fine-tune FunctionGemma for tool calling.\n",
 "\n",
-"While FunctionGemma is natively capable of calling tools. But true capability comes from two distinct skills: the mechanical knowledge of how to use a tool (syntax) and the cognitive ability to interpret *why* and *when* to use it (intent).\n",
+"While FunctionGemma is natively capable of calling tools, true capability comes from two distinct skills: the mechanical knowledge of how to use a tool (syntax) and the cognitive ability to interpret *why* and *when* to use it (intent).\n",
 "\n",
-"Models, especially smaller ones, have fewer parameters available to retain complex intent understanding. This is why we need to fine-tune them\n",
+"Models, especially smaller ones, have fewer parameters available to retain complex intent understanding. This is why we need to fine-tune them.\n",
 "\n",
 "Common use cases for fine-tuning tool calling include:\n",
 "\n",
@@ -134,7 +134,7 @@
 "id": "3raKuRFXEDNm"
 },
 "source": [
-"> _Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash attention. Flash Attention is a method that significantly speeds computations up and reduces memory usage from quadratic to linear in sequence length, leading to acelerating training up to 3x. Learn more at [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/main)._\n",
+"> _Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash attention. Flash Attention is a method that significantly speeds computations up and reduces memory usage from quadratic to linear in sequence length, leading to accelerating training up to 3x. Learn more at [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/main)._\n",
 "\n",
 "Before you can start training, you have to make sure that you accepted the terms of use for Gemma. You can accept the license on [Hugging Face](http://huggingface.co/google/functiongemma-270m-it) by clicking on the **Agree** and access repository button on the model page at: http://huggingface.co/google/functiongemma-270m-it\n",
 "\n",
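The Flash Attention note in the hunk above can be expressed in code. A minimal sketch, assuming the Hugging Face `transformers` loader is used as elsewhere in the notebook; the `attn_implementation="flash_attention_2"` argument opts in, and it requires an Ampere-or-newer GPU plus the `flash-attn` package (this is a configuration fragment, not part of the commit):

```python
# Sketch: opting in to FlashAttention-2 when loading the model with
# transformers. Requires an Ampere+ GPU (e.g. NVIDIA L4) and the
# flash-attn package installed; otherwise omit attn_implementation.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "google/functiongemma-270m-it",
    torch_dtype=torch.bfloat16,             # half precision recommended with flash attention
    attn_implementation="flash_attention_2",  # falls back to "eager"/"sdpa" if unavailable
)
```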
@@ -924,7 +924,7 @@
 "source": [
 "To plot the training and validation losses, you would typically extract these values from the `TrainerState` object or the logs generated during training.\n",
 "\n",
-"Libraries like Matplotlib can then be used to visualize these values over training steps or epochs. The x-asis would represent the training steps or epochs, and the y-axis would represent the corresponding loss values."
+"Libraries like Matplotlib can then be used to visualize these values over training steps or epochs. The x-axis would represent the training steps or epochs, and the y-axis would represent the corresponding loss values."
 ]
 },
 {
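The loss-plotting approach described in the hunk above can be sketched as follows. `trainer.state.log_history` is the standard `Trainer` attribute holding logged metrics; the sample entries below are illustrative stand-ins for real training logs:

```python
# Sketch: extract training/eval losses from a Trainer's log history and
# plot them with Matplotlib (x-axis: training steps, y-axis: loss).
import matplotlib
matplotlib.use("Agg")  # headless backend, so this also runs without a display
import matplotlib.pyplot as plt

# Illustrative stand-in for trainer.state.log_history
log_history = [
    {"step": 10, "loss": 2.31},
    {"step": 10, "eval_loss": 2.40},
    {"step": 20, "loss": 1.95},
    {"step": 20, "eval_loss": 2.10},
]

# Training entries carry "loss"; evaluation entries carry "eval_loss"
train = [(e["step"], e["loss"]) for e in log_history if "loss" in e]
evals = [(e["step"], e["eval_loss"]) for e in log_history if "eval_loss" in e]

fig, ax = plt.subplots()
ax.plot(*zip(*train), label="training loss")
ax.plot(*zip(*evals), label="validation loss")
ax.set_xlabel("training step")
ax.set_ylabel("loss")
ax.legend()
fig.savefig("losses.png")
```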
@@ -1059,7 +1059,7 @@
 }
 ],
 "source": [
-"check_success_rate()\n"
+"check_success_rate()"
 ]
 },
 {
@@ -1087,7 +1087,7 @@
 "Check out the following docs next:\n",
 "\n",
 "- [Full function calling sequence with FunctionGemma](https://ai.google.dev/gemma/docs/functiongemma/full-function-calling-sequence-with-functiongemma)\n",
-"- [Finetune FunctionGemma for Mobile Actions](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb) in the Gemma Cookbook\n"
+"- [Fine-tune FunctionGemma for Mobile Actions](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb) in the Gemma Cookbook\n"
 ]
 }
 ],
