
Commit 758cf12

Fixed typos and documentation
1 parent fdc7e32 commit 758cf12

File tree

1 file changed (+14, -15 lines)


tutorials/BertTutorial.ipynb

Lines changed: 14 additions & 15 deletions
@@ -9,7 +9,7 @@
 "This tutorial shows how to convert the original Tensorflow Bert model to ONNX. \n",
 "In this example we fine tune Bert for squad-1.1 on top of [BERT-Base, Uncased](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip).\n",
 "\n",
-"Since this tutorial cares mostly about the conversion process we reuse tokenizer and utilities defined in the Bert source tree as much as possible.\n",
+"Since this tutorial cares mostly about the conversion process, we reuse the tokenizer and utilities defined in the Bert source tree as much as possible.\n",
 "\n",
 "This should work with all versions supported by the [tensorflow-onnx converter](https://github.com/onnx/tensorflow-onnx), we used the following versions while writing the tutorial:\n",
 "```\n",
@@ -27,7 +27,7 @@
 "metadata": {},
 "source": [
 "## Step 1 - define some environment variables\n",
-"Before we start, lets setup some variables where to find things."
+"Before we start, let's set up some variables for where to find things."
 ]
 },
 {
@@ -96,21 +96,20 @@
 "outputs": [],
 "source": [
 "!wget -q https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\n",
-"!unzip /uncased_L-12_H-768_A-12.zip\n",
+"!unzip uncased_L-12_H-768_A-12.zip\n",
 "\n",
 "!mkdir squad-1.1 out\n",
 "\n",
 "!wget -O squad-1.1/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json \n",
-"!wget -O squad-1.1/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json \n",
-"!wget -O squad-1.1/evaluate-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/evaluate-v1.1.json "
+"!wget -O squad-1.1/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json \n"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Step 4 - fine tune the Bert model for squad-1.1\n",
-"This is the same as described in the [Bert repository](https://github.com/google-research/bert). You need to do this only once.\n"
+"This is the same as described in the [Bert repository](https://github.com/google-research/bert). This only needs to be done once.\n"
 ]
 },
 {
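The SQuAD-1.1 files fetched above are plain JSON, so a quick stdlib-only sanity check of the download can walk the `data` → `paragraphs` → `qas` nesting the Bert squad scripts iterate over. The mini document below is invented to show the structure; it is not taken from the real dataset:

```python
import json

# Minimal stand-in for squad-1.1/dev-v1.1.json (structure only; contents invented).
sample = json.loads("""
{
  "version": "1.1",
  "data": [
    {
      "title": "Example",
      "paragraphs": [
        {
          "context": "ONNX is an open format for machine learning models.",
          "qas": [
            {
              "id": "q1",
              "question": "What is ONNX?",
              "answers": [{"text": "an open format", "answer_start": 8}]
            }
          ]
        }
      ]
    }
  ]
}
""")

# Count questions the same way run_squad.py walks the file:
# data -> paragraphs -> qas.
num_questions = sum(
    len(para["qas"])
    for article in sample["data"]
    for para in article["paragraphs"]
)
print(num_questions)
```

Running the same loop over the downloaded `dev-v1.1.json` should report a few thousand questions; anything else suggests a truncated download.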
@@ -121,7 +120,7 @@
 "source": [
 "#\n",
 "# finetune bert for squad-1.1\n",
-"# this may take a bit\n",
+"# this will take around 3 hours to complete, and even longer if your device does not have a GPU\n",
 "#\n",
 "\n",
 "!cd bert && \\\n",
@@ -146,9 +145,9 @@
 "metadata": {},
 "source": [
 "## Step 5 - create the inference graph and save it\n",
-"With a fined tuned model in hands we want to create the inference graph for it and save it as saved_model format.\n",
+"With a fine-tuned model in hand, we want to create the inference graph for it and save it in saved_model format.\n",
 "\n",
-"***We assune that after 2 epochs the checkpoint is model.ckpt-21899 - if the following code does not find it, check the $OUT directory for the higest checkpoint***."
+"***We assume that after 2 epochs the checkpoint is model.ckpt-21899 - if the following code does not find it, check the $OUT directory for the highest checkpoint***."
 ]
 },
 {
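If the hard-coded `model.ckpt-21899` from Step 5 is missing, the highest checkpoint in `$OUT` can also be located programmatically. A stdlib sketch (the directory listing below is hypothetical):

```python
import re

def highest_checkpoint(filenames):
    """Return the model.ckpt-<step> prefix with the largest step number,
    or None if no checkpoint files are present."""
    steps = [
        int(m.group(1))
        for name in filenames
        if (m := re.match(r"model\.ckpt-(\d+)", name))
    ]
    return f"model.ckpt-{max(steps)}" if steps else None

# Hypothetical contents of the $OUT directory after fine-tuning.
files = ["model.ckpt-10000.index", "model.ckpt-21899.index",
         "model.ckpt-21899.meta", "checkpoint", "events.out"]
best = highest_checkpoint(files)
print(best)  # -> model.ckpt-21899
```

In a live Tensorflow session, `tf.train.latest_checkpoint(out_dir)` does the same lookup using the `checkpoint` metadata file.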
@@ -202,7 +201,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Create the model and run predictions on all data and save the results so we can compare them later to the onnxruntime version."
+"Create the model, run predictions on all data, and save the results to later compare them to the onnxruntime version."
 ]
 },
 {
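Saving the Tensorflow predictions makes the later comparison against onnxruntime a simple numeric diff. A minimal stdlib sketch of such a check, assuming results are kept as `{example_id: [logits...]}` dicts (names, values, and tolerance are illustrative):

```python
import math

def outputs_match(tf_results, ort_results, tol=1e-4):
    """Compare two {example_id: [logits...]} dicts element-wise
    within an absolute tolerance."""
    if tf_results.keys() != ort_results.keys():
        return False
    return all(
        math.isclose(a, b, abs_tol=tol)
        for key in tf_results
        for a, b in zip(tf_results[key], ort_results[key])
    )

# Illustrative saved logits from the two runtimes.
tf_results = {"q1": [0.1234, -2.5], "q2": [1.0, 0.0]}
ort_results = {"q1": [0.12341, -2.50001], "q2": [1.0, 0.0]}
match = outputs_match(tf_results, ort_results)
print(match)
```

An absolute tolerance is used because small float differences between runtimes are expected; exact equality would fail spuriously.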
@@ -309,7 +308,7 @@
 "scrolled": true
 },
 "source": [
-"Now lets create the inference graph and save it."
+"Now let's create the inference graph and save it."
 ]
 },
 {
@@ -340,7 +339,7 @@
 "source": [
 "## Step 6 - convert to ONNX\n",
 "\n",
-"Convert the model from tensorflow to onnx using https://github.com/onnx/tensorflow-onnx."
+"Convert the model from Tensorflow to ONNX using https://github.com/onnx/tensorflow-onnx."
 ]
 },
 {
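For Step 6, tensorflow-onnx exposes a command-line entry point. A typical invocation against the saved_model produced in Step 5 might look like the following sketch; the paths and opset number are assumptions, not values taken from the notebook:

```shell
# Convert the saved_model from Step 5 to ONNX (paths and opset are hypothetical).
python -m tf2onnx.convert \
    --saved-model ./out/saved_model \
    --output ./out/bert-squad.onnx \
    --opset 11
```

Check the tensorflow-onnx README for the opsets supported by your installed converter version before picking `--opset`.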
@@ -393,7 +392,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Lets look at the inputs to the ONNX model. The input 'unique_ids' is special and creates some issue in ONNX: the input is passed directly to the output and in Tensorflow both have the same name. In ONNX that is not supported and the converter creates a new name for the input. We need to use that created name so we remember it."
+"Let's look at the inputs to the ONNX model. The input 'unique_ids' is special and creates an issue in ONNX: the input is passed directly to the output, and in Tensorflow both have the same name. Because that is not supported in ONNX, the converter creates a new name for the input. We need to use that created name, so remember it."
 ]
 },
 {
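The renaming behaviour described above can be sketched in plain Python: when a graph input feeds straight through to an identically named output, the converter has to mint a fresh, unique input name. This is a conceptual illustration only; the suffix scheme below is invented and is not tf2onnx's actual naming:

```python
def rename_passthrough_inputs(input_names, output_names):
    """Give any input whose name collides with an output a new unique name,
    mimicking how a converter must break TF's input == output aliasing."""
    outputs = set(output_names)
    renamed = {}
    for name in input_names:
        new = name
        suffix = 0
        while new in outputs:
            suffix += 1
            new = f"{name}_raw_output_{suffix}"  # invented suffix scheme
        renamed[name] = new
    return renamed

# 'unique_ids' is both an input and an output, so only it gets renamed.
mapping = rename_passthrough_inputs(
    ["unique_ids", "input_ids", "input_mask", "segment_ids"],
    ["unique_ids", "unstack"],
)
print(mapping["unique_ids"])
```

This is why the feed dict for the ONNX model must use the converter-generated name rather than the original Tensorflow name `unique_ids`.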
@@ -552,9 +551,9 @@
 "source": [
 "## Summary\n",
 "\n",
-"That was all it takes to convert a relativly complex model from Tensorflow to ONNX. \n",
+"That was all it takes to convert a relatively complex model from Tensorflow to ONNX. \n",
 "\n",
-"You find more documentation about tensorflow-onnx [here](https://github.com/onnx/tensorflow-onnx)."
+"You can find more documentation about tensorflow-onnx [here](https://github.com/onnx/tensorflow-onnx)."
 ]
 },
 {
