Commit 2e3d17e

nbfmt
1 parent 0e0d6f2 commit 2e3d17e

File tree

1 file changed (+3, -13 lines)


site/en/tutorials/text/image_captioning.ipynb

Lines changed: 3 additions & 13 deletions
@@ -486,9 +486,7 @@
 "source": [
 "### Image feature extractor\n",
 "\n",
-"You will use an image model (pretrained on imagenet) to extract the features from each image. The model was trained as an image classifier, but setting `include_top=False` returns the model without the final classification layer, so you can use the last layer of feature-maps: \n",
-"\n",
-"\n"
+"You will use an image model (pretrained on imagenet) to extract the features from each image. The model was trained as an image classifier, but setting `include_top=False` returns the model without the final classification layer, so you can use the last layer of feature-maps: \n"
 ]
 },
 {
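
For context, the `include_top=False` behavior described in that cell can be reproduced with any `tf.keras.applications` classifier. A minimal sketch follows; the MobileNetV3Small backbone and the 224x224 input size are illustrative assumptions, not necessarily what the notebook uses:

```python
import tensorflow as tf

# Illustrative assumptions: backbone and input size may differ from the notebook.
IMAGE_SHAPE = (224, 224, 3)

# include_top=False drops the final classification layer, so calling the
# model returns the last layer of feature-maps instead of class logits.
feature_extractor = tf.keras.applications.MobileNetV3Small(
    input_shape=IMAGE_SHAPE,
    include_top=False)

images = tf.zeros([1, *IMAGE_SHAPE])    # a dummy batch of one image
features = feature_extractor(images)    # shape: (batch, h, w, channels)
print(features.shape)
```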
@@ -1053,8 +1051,6 @@
 "id": "qiRXWwIKNybB"
 },
 "source": [
-"\n",
-"\n",
 "The model will be implemented in three main parts: \n",
 "\n",
 "1. Input - The token embedding and positional encoding (`SeqEmbedding`).\n",
@@ -1164,8 +1160,7 @@
 "    attn = self.mha(query=x, value=x,\n",
 "                    use_causal_mask=True)\n",
 "    x = self.add([x, attn])\n",
-"    return self.layernorm(x)\n",
-"\n"
+"    return self.layernorm(x)\n"
 ]
 },
 {
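
The full layer that `return` belongs to would look roughly like the sketch below: multi-head self-attention with a causal mask, a residual connection, and layer normalization. The constructor arguments are assumptions:

```python
import tensorflow as tf

class CausalSelfAttention(tf.keras.layers.Layer):
  """Sketch of a causal self-attention block: MHA + residual add + layer norm."""

  def __init__(self, num_heads, key_dim):
    super().__init__()
    self.mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads,
                                                  key_dim=key_dim)
    self.add = tf.keras.layers.Add()
    self.layernorm = tf.keras.layers.LayerNormalization()

  def call(self, x):
    # use_causal_mask=True stops each position from attending to later
    # positions, which is what makes autoregressive decoding valid.
    attn = self.mha(query=x, value=x,
                    use_causal_mask=True)
    x = self.add([x, attn])
    return self.layernorm(x)
```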
@@ -1305,8 +1300,6 @@
 "id": "6WQD87efena5"
 },
 "source": [
-"\n",
-"\n",
 "But there are a few other features you can add to make this work a little better:\n",
 "\n",
 "1. **Handle bad tokens**: The model will be generating text. It should\n",
@@ -1484,8 +1477,7 @@
 "1. Flatten the extracted image features, so they can be input to the decoder layers.\n",
 "2. Look up the token embeddings.\n",
 "3. Run the stack of `DecoderLayer`s, on the image features and text embeddings.\n",
-"4. Run the output layer to predict the next token at each position.\n",
-"\n"
+"4. Run the output layer to predict the next token at each position.\n"
 ]
 },
 {
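
Taken together, those four steps map onto a forward pass roughly like the sketch below; the class and attribute names (`Captioner`, `seq_embedding`, `decoder_layers`, `output_layer`) are assumptions used for illustration:

```python
import tensorflow as tf

class Captioner(tf.keras.Model):
  """Sketch of the decoder's forward pass; names are assumptions."""

  def __init__(self, seq_embedding, decoder_layers, output_layer):
    super().__init__()
    self.seq_embedding = seq_embedding    # e.g. a SeqEmbedding layer
    self.decoder_layers = decoder_layers  # list of DecoderLayer instances
    self.output_layer = output_layer      # e.g. Dense(vocab_size)

  def call(self, inputs):
    image, txt = inputs

    # 1. Flatten the extracted image features:
    #    (batch, h, w, channels) -> (batch, h*w, channels).
    image = tf.reshape(image, [tf.shape(image)[0], -1, image.shape[-1]])

    # 2. Look up the token embeddings (plus positional encoding).
    txt = self.seq_embedding(txt)

    # 3. Run the stack of DecoderLayers on the image features and text embeddings.
    for decoder_layer in self.decoder_layers:
      txt = decoder_layer(inputs=(image, txt))

    # 4. Run the output layer to predict the next token at each position.
    return self.output_layer(txt)
```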
@@ -2144,8 +2136,6 @@
 "colab": {
 "collapsed_sections": [],
 "name": "image_captioning.ipynb",
-"private_outputs": true,
-"provenance": [],
 "toc_visible": true
 },
 "kernelspec": {
