
Commit 5432443

Merge pull request #2231 from abhigyadufare:patch-3
PiperOrigin-RevId: 548992939
2 parents f322f97 + bdb6904

File tree

2 files changed: +16 -17 lines


site/en/r1/tutorials/sequences/text_generation.ipynb

Lines changed: 15 additions & 15 deletions
@@ -77,9 +77,9 @@
 "id": "BwpJ5IffzRG6"
 },
 "source": [
-"This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data (\"Shakespear\"), train a model to predict the next character in the sequence (\"e\"). Longer sequences of text can be generated by calling the model repeatedly.\n",
+"This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data (\"Shakespear\"), train a model to predict the next character in the sequence (\"e\"). Longer sequences of text can be generated by calling the model repeatedly.\n",
 "\n",
-"Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.\n",
+"Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*. If running locally make sure TensorFlow version >= 1.11.\n",
 "\n",
 "This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string \"Q\":\n",
 "\n",
@@ -98,7 +98,7 @@
 "To watch the next way with his father with his face?\n",
 "\n",
 "ESCALUS:\n",
-"The cause why then we are all resolved more sons.\n",
+"The cause why then us all resolved more sons.\n",
 "\n",
 "VOLUMNIA:\n",
 "O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,\n",
@@ -248,7 +248,7 @@
 "source": [
 "### Vectorize the text\n",
 "\n",
-"Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters."
+"Before training, you need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters."
 ]
 },
 {
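For reference, the two lookup tables this hunk describes can be sketched as follows (a minimal sketch, assuming `text` holds the downloaded Shakespeare corpus; the notebook's own cell may differ in details):

import numpy as np

vocab = sorted(set(text))                        # unique characters in the corpus
char2idx = {u: i for i, u in enumerate(vocab)}   # character -> integer index
idx2char = np.array(vocab)                       # integer index -> character
text_as_int = np.array([char2idx[c] for c in text])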
@@ -272,7 +272,7 @@
 "id": "tZfqhkYCymwX"
 },
 "source": [
-"Now we have an integer representation for each character. Notice that we mapped the character as indexes from 0 to `len(unique)`."
+"Now you have an integer representation for each character. Notice that you mapped the character as indexes from 0 to `len(unique)`."
 ]
 },
 {
@@ -316,7 +316,7 @@
 "id": "wssHQ1oGymwe"
 },
 "source": [
-"Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.\n",
+"Given a character, or a sequence of characters, what is the most probable next character? This is the task you are training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.\n",
 "\n",
 "Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?\n"
 ]
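The input/target relationship described here amounts to shifting each sequence by one character, roughly (an illustrative sketch, not necessarily the notebook's exact cell):

def split_input_target(chunk):
    # For the chunk "Shakespear": input is "Shakespea", target is "hakespear".
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text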
@@ -346,7 +346,7 @@
 },
 "outputs": [],
 "source": [
-"# The maximum length sentence we want for a single input in characters\n",
+"# The maximum length sentence you want for a single input in characters\n",
 "seq_length = 100\n",
 "examples_per_epoch = len(text)//seq_length\n",
 "\n",
@@ -458,7 +458,7 @@
 "source": [
 "### Create training batches\n",
 "\n",
-"We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches."
+"You used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches."
 ]
 },
 {
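The shuffle-and-batch step this hunk refers to typically looks like the following (a sketch with illustrative constants; `dataset` is assumed to hold the (input, target) sequence pairs built earlier):

BATCH_SIZE = 64
BUFFER_SIZE = 10000  # tf.data shuffles within a buffer rather than the whole dataset

dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)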
@@ -650,7 +650,7 @@
 "id": "uwv0gEkURfx1"
 },
 "source": [
-"To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.\n",
+"To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.\n",
 "\n",
 "Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.\n",
 "\n",
@@ -746,7 +746,7 @@
 "source": [
 "The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.\n",
 "\n",
-"Because our model returns logits, we need to set the `from_logits` flag.\n"
+"Because our model returns logits, you need to set the `from_logits` flag.\n"
 ]
 },
 {
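Setting the `from_logits` flag amounts to a small wrapper, roughly:

def loss(labels, logits):
    # The model outputs raw logits, so tell the loss not to expect probabilities.
    return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)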
@@ -771,7 +771,7 @@
 "id": "jeOXriLcymww"
 },
 "source": [
-"Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.train.AdamOptimizer` with default arguments and the loss function."
+"Configure the training procedure using the `tf.keras.Model.compile` method. You'll use `tf.train.AdamOptimizer` with default arguments and the loss function."
 ]
 },
 {
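The compile call described here is, roughly (assuming the `loss` wrapper sketched above):

model.compile(optimizer=tf.train.AdamOptimizer(), loss=loss)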
@@ -891,7 +891,7 @@
 "\n",
 "Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.\n",
 "\n",
-"To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.\n"
+"To run the model with a different `batch_size`, you need to rebuild the model and restore the weights from the checkpoint.\n"
 ]
 },
 {
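The rebuild-and-restore step can be sketched as follows (a hypothetical `build_model` helper and hyperparameter names stand in for the tutorial's earlier cells):

model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))  # lock in the new batch size of 1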
@@ -992,7 +992,7 @@
 " predictions = predictions / temperature\n",
 " predicted_id = tf.multinomial(predictions, num_samples=1)[-1,0].numpy()\n",
 "\n",
-" # We pass the predicted word as the next input to the model\n",
+" # You pass the predicted word as the next input to the model\n",
 " # along with the previous hidden state\n",
 " input_eval = tf.expand_dims([predicted_id], 0)\n",
 "\n",
@@ -1035,11 +1035,11 @@
 "\n",
 "So now that you've seen how to run the model manually let's unpack the training loop, and implement it ourselves. This gives a starting point, for example, to implement _curriculum learning_ to help stabilize the model's open-loop output.\n",
 "\n",
-"We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/r1/guide/eager).\n",
+"You will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/r1/guide/eager).\n",
 "\n",
 "The procedure works as follows:\n",
 "\n",
-"* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.\n",
+"* First, initialize the RNN state. You do this by calling the `tf.keras.Model.reset_states` method.\n",
 "\n",
 "* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.\n",
 "\n",

site/en/tutorials/video/video_classification.ipynb

Lines changed: 1 addition & 2 deletions
@@ -499,7 +499,7 @@
 "id": "I-fCAddqEORZ"
 },
 "source": [
-"A ResNet model resnet model is made from a sequence of residual blocks.\n",
+"A ResNet model is made from a sequence of residual blocks.\n",
 "A residual block has two branches. The main branch performs the calculation, but is difficult for gradients to flow through.\n",
 "The residual branch bypasses the main calculation and mostly just adds the input to the output of the main branch.\n",
 "Gradients flow easily through this branch.\n",
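The two-branch structure this hunk describes can be illustrated with a minimal Keras layer (an illustrative sketch, not the tutorial's actual block; it assumes the input already has `filters` channels so the add is shape-compatible):

import tensorflow as tf

class ResidualBlock(tf.keras.layers.Layer):
    def __init__(self, filters):
        super().__init__()
        # Main branch: performs the computation.
        self.conv = tf.keras.layers.Conv3D(filters, 3, padding='same', activation='relu')
        # Residual branch: the identity skip, combined by addition.
        self.add = tf.keras.layers.Add()

    def call(self, x):
        return self.add([x, self.conv(x)])  # gradients flow freely through the add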
@@ -1049,7 +1049,6 @@
 "accelerator": "GPU",
 "colab": {
 "name": "video_classification.ipynb",
-"provenance": [],
 "toc_visible": true
 },
 "kernelspec": {
