Commit 317b509

Fixed Broken link for word_embedding
Fixed Broken link for word_embedding notebook
1 parent 465c891 commit 317b509

File tree

1 file changed: +1 addition, −1 deletion


site/en/tutorials/keras/text_classification.ipynb

Lines changed: 1 addition & 1 deletion
@@ -648,7 +648,7 @@
  "source": [
  "The layers are stacked sequentially to build the classifier:\n",
  "\n",
- "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, see the [word embedding tutorial](../text/word_embeddings.ipynb).\n",
+ "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`. To learn more about embeddings, see the [word embedding tutorial](https://github.com/tensorflow/text/blob/master/docs/guide/word_embeddings.ipynb).\n",
  "2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n",
  "3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units. \n",
  "4. The last layer is densely connected with a single output node."
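The four stacked layers described in the diff above can be sketched as a plain NumPy forward pass (the tutorial itself builds the model with Keras layers; the vocabulary size, sequence length, batch size, and random weights here are illustrative assumptions, not the tutorial's values):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embedding_dim, seq_len, batch = 100, 16, 8, 2  # toy sizes (assumed)

# 1. Embedding: look up a learned vector for each word-index.
#    Output shape: (batch, sequence, embedding).
embedding_table = rng.normal(size=(vocab_size, embedding_dim))
tokens = rng.integers(0, vocab_size, size=(batch, seq_len))  # integer-encoded reviews
embedded = embedding_table[tokens]

# 2. GlobalAveragePooling1D: average over the sequence dimension,
#    giving a fixed-length vector regardless of input length.
pooled = embedded.mean(axis=1)  # (batch, embedding)

# 3. Fully-connected (Dense) layer with 16 hidden units and ReLU.
w1 = rng.normal(size=(embedding_dim, 16))
b1 = np.zeros(16)
hidden = np.maximum(pooled @ w1 + b1, 0.0)

# 4. Final densely connected layer with a single output node (logit).
w2 = rng.normal(size=(16, 1))
b2 = np.zeros(1)
logits = hidden @ w2 + b2  # (batch, 1)
```

Because the pooling in step 2 collapses the sequence axis, the same weights handle reviews of any length, which is why the tutorial calls it the simplest way to support variable-length input.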
