
Commit 7868875

Nb updates (#556)
* moving, updating, and deleting notebooks
* update, move, add and delete notebooks
1 parent dde2d87 commit 7868875

File tree

7 files changed: +632 −2783 lines


site/en/gemma/docs/distributed_tuning.ipynb renamed to site/en/gemma/docs/core/distributed_tuning.ipynb

Lines changed: 8 additions & 10 deletions
@@ -16,7 +16,7 @@
  "id": "Tce3stUlHN0L"
  },
  "source": [
- "##### Copyright 2024 Google LLC."
+ "##### Copyright 2025 Google LLC."
  ]
  },
  {
@@ -49,19 +49,19 @@
  "source": [
  "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
  " <td>\n",
- " <a target=\"_blank\" href=\"https://ai.google.dev/gemma/docs/distributed_tuning\"><img src=\"https://ai.google.dev/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on ai.google.dev</a>\n",
+ " <a target=\"_blank\" href=\"https://ai.google.dev/gemma/docs/core/distributed_tuning\"><img src=\"https://ai.google.dev/static/site-assets/images/docs/notebook-site-button.png\" height=\"32\" width=\"32\" />View on ai.google.dev</a>\n",
  " </td>\n",
  " <td>\n",
  " <a target=\"_blank\" href=\"https://colab.research.google.com/github/googlecolab/colabtools/blob/main/notebooks/Gemma_Distributed_Fine_tuning_on_TPU.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
  " </td>\n",
  " <td>\n",
- " <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/google/generative-ai-docs/blob/main/site/en/gemma/docs/distributed_tuning.ipynb\"><img src=\"https://www.kaggle.com/static/images/logos/kaggle-logo-transparent-300.png\" height=\"32\" width=\"70\"/>Run in Kaggle</a>\n",
+ " <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/google/generative-ai-docs/blob/main/site/en/gemma/docs/core/distributed_tuning.ipynb\"><img src=\"https://www.kaggle.com/static/images/logos/kaggle-logo-transparent-300.png\" height=\"32\" width=\"70\"/>Run in Kaggle</a>\n",
  " </td>\n",
  " <td>\n",
- " <a target=\"_blank\" href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/google/generative-ai-docs/main/site/en/gemma/docs/distributed_tuning.ipynb\"><img src=\"https://ai.google.dev/images/cloud-icon.svg\" width=\"40\" />Open in Vertex AI</a>\n",
+ " <a target=\"_blank\" href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/google/generative-ai-docs/main/site/en/gemma/docs/core/distributed_tuning.ipynb\"><img src=\"https://ai.google.dev/images/cloud-icon.svg\" width=\"40\" />Open in Vertex AI</a>\n",
  " </td>\n",
  " <td>\n",
- " <a target=\"_blank\" href=\"https://github.com/google/generative-ai-docs/blob/main/site/en/gemma/docs/distributed_tuning.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
+ " <a target=\"_blank\" href=\"https://github.com/google/generative-ai-docs/blob/main/site/en/gemma/docs/core/distributed_tuning.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
  " </td>\n",
  "</table>"
  ]
@@ -81,10 +81,8 @@
  "id": "Tdlq6K0znh3O"
  },
  "source": [
- "## Overview\n",
- "\n",
  "Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. Gemma can be further finetuned to suit specific needs. But Large Language Models, such as Gemma, can be very large in size and some of them may not fit on a single accelerator for finetuning. In this case there are two general approaches for finetuning them:\n",
- "1. Parameter Efficient Fine-Tuning (PEFT), which seeks to shrink the effective model size by sacrificing some fidelity. LoRA falls in this category and the [Fine-tune Gemma models in Keras using LoRA](https://ai.google.dev/gemma/docs/lora_tuning) tutorial demonstrates how to finetune the Gemma 2B model `gemma_2b_en` with LoRA using KerasNLP on a single GPU.\n",
+ "1. Parameter Efficient Fine-Tuning (PEFT), which seeks to shrink the effective model size by sacrificing some fidelity. LoRA falls in this category and the [Fine-tune Gemma models in Keras using LoRA](https://ai.google.dev/gemma/docs/core/lora_tuning) tutorial demonstrates how to finetune the Gemma 2B model `gemma_2b_en` with LoRA using KerasNLP on a single GPU.\n",
  "2. Full parameter finetuning with model parallelism. Model parallelism distributes a single model's weights across multiple devices and enables horizontal scaling. You can find out more about distributed training in this [Keras guide](https://keras.io/guides/distribution/).\n",
  "\n",
  "This tutorial walks you through using Keras with a JAX backend to finetune the Gemma 7B model with LoRA and model-parallelism distributed training on Google's Tensor Processing Unit (TPU). Note that LoRA can be turned off in this tutorial for a slower but more accurate full-parameter tuning."
@@ -105,7 +103,7 @@
  "Google has 3 products that provide TPUs:\n",
  "* [Colab](https://colab.sandbox.google.com/) provides TPU v2 for free, which is sufficient for this tutorial.\n",
  "* [Kaggle](https://www.kaggle.com/) offers TPU v3 for free and they also work for this tutorial.\n",
- "* [Cloud TPU](https://cloud.google.com/tpu?hl=en) offers TPU v3 and newer generations. One way to set it up is:\n",
+ "* [Cloud TPU](https://cloud.google.com/tpu) offers TPU v3 and newer generations. One way to set it up is:\n",
  " 1. Create a new [TPU VM](https://cloud.google.com/tpu/docs/managing-tpus-tpu-vm#tpu-vms)\n",
  " 2. Set up [SSH port forwarding](https://cloud.google.com/solutions/connecting-securely#port-forwarding-over-ssh) for your intended Jupyter server port\n",
  " 3. Install Jupyter and start it on the TPU VM, then connect to Colab through \"Connect to a local runtime\"\n",
@@ -963,7 +961,7 @@
  "In this tutorial, you learned how to use the KerasNLP JAX backend to finetune a Gemma model on the IMDb dataset in a distributed manner on powerful TPUs. Here are a few suggestions for what else to learn:\n",
  "\n",
  "* Learn how to [get started with Keras Gemma](https://ai.google.dev/gemma/docs/get_started).\n",
- "* Learn how to [finetune the Gemma model on GPU](https://ai.google.dev/gemma/docs/lora_tuning)."
+ "* Learn how to [finetune the Gemma model on GPU](https://ai.google.dev/gemma/docs/core/lora_tuning)."
  ]
  }
  ],

0 commit comments
