
Commit cd28e12

Update to KerasHub package
1 parent 61a3666 commit cd28e12

File tree

1 file changed: +12 −12 lines


site/en/gemma/docs/core/lora_tuning.ipynb

Lines changed: 12 additions & 12 deletions
@@ -83,7 +83,7 @@
 "\n",
 "[Low Rank Adaptation (LoRA)](https://arxiv.org/abs/2106.09685) is a fine-tuning technique which greatly reduces the number of trainable parameters for downstream tasks by freezing the weights of the model and inserting a smaller number of new weights into the model. This makes training with LoRA much faster and more memory-efficient, and produces smaller model weights (a few hundred MBs), all while maintaining the quality of the model outputs.\n",
 "\n",
-"This tutorial walks you through using KerasNLP to perform LoRA fine-tuning on a Gemma 2B model using the [Databricks Dolly 15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k). This dataset contains 15,000 high-quality human-generated prompt / response pairs specifically designed for fine-tuning LLMs."
+"This tutorial walks you through using KerasHub to perform LoRA fine-tuning on a Gemma 2B model using the [Databricks Dolly 15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k). This dataset contains 15,000 high-quality human-generated prompt / response pairs specifically designed for fine-tuning LLMs."
 ]
 },
 {
@@ -180,7 +180,7 @@
 "source": [
 "### Install dependencies\n",
 "\n",
-"Install Keras, KerasNLP, and other dependencies."
+"Install Keras, KerasHub, and other dependencies."
 ]
 },
 {
@@ -192,8 +192,8 @@
 "outputs": [],
 "source": [
 "# Install Keras 3 last. See https://keras.io/getting_started/ for more details.\n",
-"!pip install -q -U keras-nlp\n",
-"!pip install -q -U \"keras>=3\""
+"!pip install -q -U keras-hub\n",
+"!pip install -q -U keras"
 ]
 },
 {
@@ -230,7 +230,7 @@
 "source": [
 "### Import packages\n",
 "\n",
-"Import Keras and KerasNLP."
+"Import Keras and KerasHub."
 ]
 },
 {
@@ -242,7 +242,7 @@
 "outputs": [],
 "source": [
 "import keras\n",
-"import keras_nlp"
+"import keras_hub"
 ]
 },
 {
@@ -329,7 +329,7 @@
 "source": [
 "## Load Model\n",
 "\n",
-"KerasNLP provides implementations of many popular [model architectures](https://keras.io/api/keras_nlp/models/). In this tutorial, you'll create a model using `GemmaCausalLM`, an end-to-end Gemma model for causal language modeling. A causal language model predicts the next token based on previous tokens.\n",
+"KerasHub provides implementations of many popular [model architectures](https://keras.io/api/keras_hub/models/). In this tutorial, you'll create a model using `GemmaCausalLM`, an end-to-end Gemma model for causal language modeling. A causal language model predicts the next token based on previous tokens.\n",
 "\n",
 "Create the model using the `from_preset` method:"
 ]
@@ -466,7 +466,7 @@
 }
 ],
 "source": [
-"gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset(\"gemma2_2b_en\")\n",
+"gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(\"gemma2_2b_en\")\n",
 "gemma_lm.summary()"
 ]
 },
@@ -557,7 +557,7 @@
 " instruction=\"What should I do on a trip to Europe?\",\n",
 " response=\"\",\n",
 ")\n",
-"sampler = keras_nlp.samplers.TopKSampler(k=5, seed=2)\n",
+"sampler = keras_hub.samplers.TopKSampler(k=5, seed=2)\n",
 "gemma_lm.compile(sampler=sampler)\n",
 "print(gemma_lm.generate(prompt, max_length=256))"
 ]
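The `TopKSampler(k=5, seed=2)` call in the hunk above restricts each decoding step to the five highest-scoring tokens and samples among them. A minimal pure-Python sketch of that idea, for orientation only — the `top_k_sample` helper and the toy logits are ours, not KerasHub code:

```python
import math
import random

def top_k_sample(logits, k=5, seed=2):
    """Sample a token from the k highest-scoring candidates.

    `logits` maps token -> unnormalized score. This mirrors what a
    top-k sampler does at one decoding step; it is an illustration,
    not the keras_hub.samplers.TopKSampler implementation.
    """
    rng = random.Random(seed)
    # Keep only the k most likely tokens.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax over the surviving scores (shifted by the max for stability).
    m = max(score for _, score in top)
    weights = [math.exp(score - m) for _, score in top]
    tokens = [token for token, _ in top]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy next-token distribution: "cat" is outside the top 5 and can never be drawn.
logits = {"Paris": 4.0, "London": 3.5, "Rome": 3.0, "Berlin": 1.0, "Oslo": 0.5, "cat": -2.0}
print(top_k_sample(logits, k=5, seed=2))
```

Because the sampler is seeded, repeated calls with the same seed return the same token, which is why the notebook's outputs are reproducible.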
@@ -912,7 +912,7 @@
 " instruction=\"What should I do on a trip to Europe?\",\n",
 " response=\"\",\n",
 ")\n",
-"sampler = keras_nlp.samplers.TopKSampler(k=5, seed=2)\n",
+"sampler = keras_hub.samplers.TopKSampler(k=5, seed=2)\n",
 "gemma_lm.compile(sampler=sampler)\n",
 "print(gemma_lm.generate(prompt, max_length=256))"
 ]
@@ -993,12 +993,12 @@
 "source": [
 "## Summary and next steps\n",
 "\n",
-"This tutorial covered LoRA fine-tuning on a Gemma model using KerasNLP. Check out the following docs next:\n",
+"This tutorial covered LoRA fine-tuning on a Gemma model using KerasHub. Check out the following docs next:\n",
 "\n",
 "* Learn how to [generate text with a Gemma model](https://ai.google.dev/gemma/docs/get_started).\n",
 "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/core/distributed_tuning).\n",
 "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma).\n",
-"* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
+"* Learn how to [fine-tune Gemma using KerasHub and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
 ]
 }
 ],
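The edits in this commit are a mechanical rename applied cell by cell. The same change could be scripted over the notebook's JSON; a rough sketch, where the `rename_keras_nlp` helper and its replacement table are our assumptions and not part of this commit:

```python
import json

# Textual replacements this commit applies by hand: module name,
# pip package name, and the product name in prose.
REPLACEMENTS = [
    ("keras_nlp", "keras_hub"),
    ("keras-nlp", "keras-hub"),
    ("KerasNLP", "KerasHub"),
]

def rename_keras_nlp(notebook_json: str) -> str:
    """Rewrite KerasNLP references to KerasHub in every notebook cell."""
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        source = cell.get("source", [])
        for i, line in enumerate(source):
            for old, new in REPLACEMENTS:
                line = line.replace(old, new)
            source[i] = line
    return json.dumps(nb)

nb = json.dumps({"cells": [{"cell_type": "code", "source": ["import keras_nlp\n"]}]})
print(rename_keras_nlp(nb))
```

Note that a separator-free spelling like `kerasnlp` (as in the `model_garden_gemma_kerasnlp_to_vertexai.ipynb` sample path) matches none of the patterns and is left untouched, which is consistent with the final hunk keeping that URL as-is.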
