
Commit 3cf5a79

syncing changes to Keras + Gemma get started (google#293)
1 parent da5cfaf commit 3cf5a79

File tree

1 file changed: +23 -3 lines

site/en/gemma/docs/get_started.ipynb

Lines changed: 23 additions & 3 deletions
@@ -234,8 +234,24 @@
    "id": "XrAWvsU6pI0E"
   },
   "source": [
-   "`from_preset` instantiates the model from a preset architecture and weights. In the code above, the string `\"gemma_2b_en\"` specifies the preset architecture: a Gemma model with 2 billion parameters. (A Gemma model with 7 billion parameters is also available. To run the larger model in Colab, you need access to the premium GPUs available in paid plans. Alternatively, you can perform [distributed tuning on a Gemma 7B model](https://ai.google.dev/gemma/docs/distributed_tuning) on Kaggle or Google Cloud.)\n",
-   "\n",
+   "`from_preset` instantiates the model from a preset architecture and weights. In the code above, the string `\"gemma_2b_en\"` specifies the preset architecture: a Gemma model with 2 billion parameters.\n"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {
+   "id": "Ij73k0PfUhjE"
+  },
+  "source": [
+   "Note: A Gemma model with 7 billion parameters is also available. To run the larger model in Colab, you need access to the premium GPUs available in paid plans. Alternatively, you can perform [distributed tuning on a Gemma 7B model](https://ai.google.dev/gemma/docs/distributed_tuning) on Kaggle or Google Cloud."
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {
+   "id": "E-cSEjULUhST"
+  },
+  "source": [
    "Use `summary` to get more info about the model:"
   ]
  },
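For reference, the code cell this markdown describes loads the model along these lines (a minimal sketch, assuming the keras_nlp package is installed and Kaggle credentials for the Gemma weights are already configured):

import keras_nlp

# Instantiate Gemma from the "gemma_2b_en" preset: the 2-billion-parameter,
# English checkpoint, with the matching architecture and pretrained weights.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")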
@@ -380,7 +396,9 @@
    "id": "81KHdRYOrWYm"
   },
   "source": [
-   "As you can see from the summary, the model has 2.5 billion trainable parameters."
+   "As you can see from the summary, the model has 2.5 billion trainable parameters.\n",
+   "\n",
+   "Note: For purposes of naming the model (\"2B\"), the embedding layer is not counted against the number of parameters."
   ]
  },
  {
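The summary call referenced here is the standard Keras model summary (a sketch, reusing the hypothetical gemma_lm object from the snippet above):

# Prints the layer stack and parameter counts. The ~2.5 billion total
# includes the embedding layer, which the "2B" model name does not count.
gemma_lm.summary()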
@@ -561,11 +579,13 @@
    "\n",
    "* Learn how to [finetune a Gemma model](https://ai.google.dev/gemma/docs/lora_tuning).\n",
    "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/distributed_tuning).\n",
+   "* Learn about [Gemma integration with Vertex AI](https://ai.google.dev/gemma/docs/integrations/vertex)\n",
    "* Learn how to [use Gemma models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma)."
   ]
  }
 ],
 "metadata": {
+  "accelerator": "GPU",
  "colab": {
   "name": "get_started.ipynb",
   "toc_visible": true