Commit 5409338

Merge pull request #2033 from mohantym:patch-1

PiperOrigin-RevId: 430334914

2 parents: bfe577b + 39c0b9e

File tree

1 file changed: +1 −1 lines changed


site/en/tutorials/keras/text_classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -648,7 +648,7 @@
     "source": [
     "The layers are stacked sequentially to build the classifier:\n",
     "\n",
-    "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, see the [word embedding tutorial](../text/word_embeddings.ipynb).\n",
+    "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, check out the [Word embeddings](https://www.tensorflow.org/text/guide/word_embeddings) tutorial.\n",
     "2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n",
     "3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.\n",
     "4. The last layer is densely connected with a single output node."
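The four layers described in the changed cell can be sketched as a `tf.keras.Sequential` model. This is a minimal sketch, not the notebook's exact cell: the `vocab_size` and `embedding_dim` values below are assumptions chosen to match the tutorial's typical defaults.

```python
import numpy as np
import tensorflow as tf

vocab_size = 10000   # assumed vocabulary size
embedding_dim = 16   # assumed embedding dimension

model = tf.keras.Sequential([
    # 1. Looks up an embedding vector per word-index:
    #    (batch, sequence) -> (batch, sequence, embedding)
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    # 2. Averages over the sequence dimension, giving a fixed-length
    #    vector per example: (batch, sequence, embedding) -> (batch, embedding)
    tf.keras.layers.GlobalAveragePooling1D(),
    # 3. Fully-connected layer with 16 hidden units
    tf.keras.layers.Dense(16, activation="relu"),
    # 4. Single output node
    tf.keras.layers.Dense(1),
])

# Variable-length handling: any sequence length collapses to (batch, 1)
out = model(np.zeros((2, 5), dtype="int64"))
```

Because step 2 pools away the sequence axis, the same model accepts batches of any sequence length, which is the point the diff's second bullet makes.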
