Commit 32d53ab

Update notebook to use embedding gemma (#36035)

* Add auth for models
* Update to gemma embedding model

1 parent 1982627 commit 32d53ab

File tree

1 file changed: +27 −3 lines


examples/notebooks/beam-ml/data_preprocessing/huggingface_text_embeddings.ipynb

Lines changed: 27 additions & 3 deletions
@@ -45,7 +45,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"# Generate text embeddings by using Hugging Face Hub models\n",
+"# Generate text embeddings by using the EmbeddingGemma model from Hugging Face\n",
 "\n",
 "<table align=\"left\">\n",
 " <td>\n",
@@ -75,6 +75,8 @@
 "\n",
 "This notebook uses Apache Beam's `MLTransform` to generate embeddings from text data.\n",
 "\n",
+"Using a small, efficient open model like EmbeddingGemma at the core of your pipeline makes the process self-contained: the embedding step needs no network calls to external services, which simplifies management. Because it's an open model, it can be hosted entirely within Dataflow, so you can process large-scale, private datasets securely. For more information about the model, see the [model card](https://huggingface.co/google/embeddinggemma-300m).\n",
+"\n",
 "Hugging Face's [`SentenceTransformers`](https://huggingface.co/sentence-transformers) framework uses Python to generate sentence, text, and image embeddings.\n",
 "\n",
 "To generate text embeddings that use Hugging Face models and `MLTransform`, use the `SentenceTransformerEmbeddings` module to specify the model configuration.\n"
@@ -120,6 +122,28 @@
 "execution_count": 29,
 "outputs": []
 },
+{
+"cell_type": "markdown",
+"source": [
+"### Authenticate with Hugging Face\n",
+"\n",
+"To ensure that you can pull the correct model, authenticate with Hugging Face by following the prompts in the cell."
+],
+"metadata": {
+"id": "kXDM8C7d3nPW"
+}
+},
+{
+"cell_type": "code",
+"source": [
+"!hf auth login"
+],
+"metadata": {
+"id": "jVxSi2jS3M3c"
+},
+"execution_count": 29,
+"outputs": []
+},
 {
 "cell_type": "markdown",
 "source": [
@@ -170,7 +194,7 @@
 " {'x': \"Should I sign up for Medicare Part B if I have Veterans' Benefits?\"}\n",
 "]\n",
 "\n",
-"text_embedding_model_name = 'sentence-transformers/sentence-t5-large'\n",
+"text_embedding_model_name = 'google/embeddinggemma-300m'\n",
 "\n",
 "\n",
 "# helper function that returns a dict containing only first\n",
@@ -191,7 +215,7 @@
 "source": [
 "\n",
 "### Generate text embeddings\n",
-"This example uses the model `sentence-transformers/sentence-t5-large` to generate text embeddings. The model uses only the encoder from a `T5-large model`. The weights are stored in FP16. For more information about the model, see [Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models](https://arxiv.org/abs/2108.08877)."
+"This example uses the model `google/embeddinggemma-300m` to generate text embeddings. For more information about the model, see [the model card](https://huggingface.co/google/embeddinggemma-300m)."
 ],
 "metadata": {
 "id": "SApMmlRLRv_e"
