Commit e8f6d34

Merge pull request #470 from IAmSuyogJadhav/patch-1
Fixed various typos, Punctuation and grammar mistakes.
2 parents b9115e0 + 5eea227 commit e8f6d34


docs/user/tutorial.rst

Lines changed: 24 additions & 24 deletions
@@ -8,11 +8,11 @@ For deep learning, this tutorial will walk you through building handwritten
 digits classifiers using the MNIST dataset, arguably the "Hello World" of neural
 networks. For reinforcement learning, we will let computer learns to play Pong
 game from the original screen inputs. For nature language processing, we start
-from word embedding, and then describe language modeling and machine
+from word embedding and then describe language modeling and machine
 translation.

 This tutorial includes all modularized implementation of Google TensorFlow Deep
-Learning tutorial, so you could read TensorFlow Deep Learning tutorial as the same time
+Learning tutorial, so you could read TensorFlow Deep Learning tutorial at the same time
 `[en] <https://www.tensorflow.org/versions/master/tutorials/index.html>`_ `[cn] <http://wiki.jikexueyuan.com/project/tensorflow-zh/>`_ .

 .. note::
@@ -26,7 +26,7 @@ Before we start

 The tutorial assumes that you are somewhat familiar with neural networks and
 TensorFlow (the library which `TensorLayer`_ is built on top of). You can try to learn
-the basic of neural network from the `Deeplearning Tutorial`_.
+the basics of a neural network from the `Deeplearning Tutorial`_.

 For a more slow-paced introduction to artificial neural networks, we recommend
 `Convolutional Neural Networks for Visual Recognition`_ by Andrej Karpathy et
@@ -117,9 +117,9 @@ Run the MNIST example
    :align: center

 In the first part of the tutorial, we will just run the MNIST example that's
-included in the source distribution of `TensorLayer`_. MNIST dataset contains 60000
-handwritten digits that is commonly used for training various
-image processing systems, each of digit has 28x28 pixels.
+included in the source distribution of `TensorLayer`_. The MNIST dataset contains 60000
+handwritten digits that are commonly used for training various
+image processing systems. Each digit is 28x28 pixels in size.

 We assume that you have already run through the :ref:`installation`. If you
 haven't done so already, get a copy of the source tree of TensorLayer, and navigate
@@ -265,7 +265,7 @@ For Convolutional Neural Network example, the MNIST can be load as 4D version as
    tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))

 ``X_train.shape`` is ``(50000, 28, 28, 1)`` which represents 50,000 images with 1 channel, 28 rows and 28 columns each.
-Channel one is because it is a grey scale image, every pixel have only one value.
+Channel one is because it is a grey scale image, every pixel has only one value.

 Building the model
 ------------------
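For reference, the loader mentioned in this hunk can be tried on its own. A minimal sketch (the variable names follow the tutorial's convention; the print lines are only illustrative):

.. code-block:: python

    import tensorlayer as tl

    # shape=(-1, 28, 28, 1) returns 4D arrays (batch, height, width, channel) for a CNN;
    # shape=(-1, 784) would return flat vectors for an MLP instead.
    X_train, y_train, X_val, y_val, X_test, y_test = \
        tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))

    print(X_train.shape)  # (50000, 28, 28, 1) -- grey scale, so a single channel
    print(y_train.shape)  # (50000,) -- integer class labels 0..9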
@@ -280,10 +280,10 @@ As mentioned above, ``tutorial_mnist.py`` supports four types of models, and we
 implement that via easily exchangeable functions of the same interface.
 First, we'll define a function that creates a Multi-Layer Perceptron (MLP) of
 a fixed architecture, explaining all the steps in detail. We'll then implement
-a Denosing Autoencoder (DAE), after that we will then stack all Denoising Autoencoder and
+a Denoising Autoencoder (DAE), after that we will then stack all Denoising Autoencoder and
 supervised fine-tune them. Finally, we'll show how to create a
 Convolutional Neural Network (CNN). In addition, a simple example for MNIST
-dataset in ``tutorial_mnist_simple.py``, a CNN example for CIFAR-10 dataset in
+dataset in ``tutorial_mnist_simple.py``, a CNN example for the CIFAR-10 dataset in
 ``tutorial_cifar10_tfrecord.py``.

@@ -295,9 +295,9 @@ The first script, ``main_test_layers()``, creates an MLP of two hidden layers of
 dropout to the input data and 50% dropout to the hidden layers.

 To feed data into the network, TensofFlow placeholders need to be defined as follow.
-The ``None`` here means the network will accept input data of arbitrary batchsize after compilation.
+The ``None`` here means the network will accept input data of arbitrary batch size after compilation.
 The ``x`` is used to hold the ``X_train`` data and ``y_`` is used to hold the ``y_train`` data.
-If you know the batchsize beforehand and do not need this flexibility, you should give the batchsize
+If you know the batch size beforehand and do not need this flexibility, you should give the batch size
 here -- especially for convolutional layers, this can allow TensorFlow to apply
 some optimizations.

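The placeholder definitions this hunk refers to look roughly like the sketch below (TensorFlow 1.x style; 784 is the flattened 28x28 image, and the names are taken from the surrounding text):

.. code-block:: python

    import tensorflow as tf

    # None in the leading dimension lets the same graph accept any batch size;
    # fixing it (e.g. shape=[128, 784]) may let TensorFlow apply extra optimizations.
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    y_ = tf.placeholder(tf.int64, shape=[None], name='y_')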
@@ -369,14 +369,14 @@ need the output layer(s) to access a network in TensorLayer:
    cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(y, y_))

 Here, ``network.outputs`` is the 10 identity outputs from the network (in one hot format), ``y_op`` is the integer
-output represents the class index. While ``cost`` is the cross-entropy between target and predicted labels.
+output represents the class index. While ``cost`` is the cross-entropy between the target and the predicted labels.

 Denoising Autoencoder (DAE)
 --------------------------------------

 Autoencoder is an unsupervised learning model which is able to extract representative features,
 it has become more widely used for learning generative models of data and Greedy layer-wise pre-train.
-For vanilla Autoencoder see `Deeplearning Tutorial`_.
+For vanilla Autoencoder, see `Deeplearning Tutorial`_.

 The script ``main_test_denoise_AE()`` implements a Denoising Autoencoder with corrosion rate of 50%.
 The Autoencoder can be defined as follow, where an Autoencoder is represented by a ``DenseLayer``:
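The actual definition sits outside this hunk; as a rough sketch of the idea only (the layer names, the 800 hidden units and the 50% corrosion rate are illustrative, assuming the TensorLayer 1.x layer API):

.. code-block:: python

    # The DropoutLayer acts as the corrosion (noise) layer in front of the encoder,
    # and ReconLayer wraps the DenseLayer so it can be pre-trained to reconstruct x.
    network = tl.layers.InputLayer(x, name='input_layer')
    network = tl.layers.DropoutLayer(network, keep=0.5, name='denoising1')  # 50% corrosion
    network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.sigmoid, name='sigmoid1')
    recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
                                        act=tf.nn.sigmoid, name='recon_layer1')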
@@ -395,9 +395,9 @@ The Autoencoder can be defined as follow, where an Autoencoder is represented by
 To train the ``DenseLayer``, simply run ``ReconLayer.pretrain()``, if using denoising Autoencoder, the name of
 corrosion layer (a ``DropoutLayer``) need to be specified as follow. To save the feature images, set ``save`` to ``True``.
 There are many kinds of pre-train metrices according to different architectures and applications. For sigmoid activation,
-the Autoencoder can be implemented by using KL divergence, while for rectifer, L1 regularization of activation outputs
+the Autoencoder can be implemented by using KL divergence, while for rectifier, L1 regularization of activation outputs
 can make the output to be sparse. So the default behaviour of ``ReconLayer`` only provide KLD and cross-entropy for sigmoid
-activation function and L1 of activation outputs and mean-squared-error for rectifing activation function.
+activation function and L1 of activation outputs and mean-squared-error for rectifying activation function.
 We recommend you to modify ``ReconLayer`` to achieve your own pre-train metrice.

 .. code-block:: python
@@ -486,7 +486,7 @@ see :mod:`tensorlayer.cost` for more.
 Apart from using ``network.all_params`` to get the variables, we can also use ``tl.layers.get_variables_with_name`` to get the specific variables by string name.

 Having the model and the loss function here, we create update expression/operation
-for training the network. TensorLayer do not provide many optimizers, we used TensorFlow's
+for training the network. TensorLayer does not provide many optimizers, we used TensorFlow's
 optimizer instead:

 .. code-block:: python
@@ -505,7 +505,7 @@ For training the network, we fed data and the keeping probabilities to the ``fee
    sess.run(train_op, feed_dict=feed_dict)

 While, for validation and testing, we use slightly different way. All
-Dropout, Dropconnect, Corrosion layers need to be disable.
+Dropout, Dropconnect, Corrosion layers need to be disabled.
 We use ``tl.utils.dict_to_one`` to set all ``network.all_drop`` to 1.

 .. code-block:: python
@@ -593,9 +593,9 @@ If everything is set up correctly, you will get an output like the following:
    episode 1: game 5 took 0.17348s, reward: -1.000000
    episode 1: game 6 took 0.09415s, reward: -1.000000

-This example allow neural network to learn how to play Pong game from the screen inputs,
+This example allows the neural network to learn how to play Pong game from the screen inputs,
 just like human behavior.
-The neural network will play with a fake AI player, and lean to beat it.
+The neural network will play with a fake AI player and learn to beat it.
 After training for 15,000 episodes, the neural network can
 win 20% of the games. The neural network win 35% of the games at 20,000 episode,
 we can seen the neural network learn faster and faster as it has more winning data to
@@ -994,7 +994,7 @@ Understand LSTM
 Recurrent Neural Network
 -------------------------

-We personally think Andrey Karpathy's blog is the best material to
+We personally think Andrej Karpathy's blog is the best material to
 `Understand Recurrent Neural Network`_ , after reading that, Colah's blog can
 help you to `Understand LSTM Network`_ `[chinese] <http://dataunion.org/9331.html>`_
 which can solve The Problem of Long-Term
@@ -1003,7 +1003,7 @@ before you go on.

 .. image:: my_figs/karpathy_rnn.jpeg

-Image by Andrey Karpathy
+Image by Andrej Karpathy


 Synced sequence input and output
@@ -1283,7 +1283,7 @@ In Example page, we provide many examples include Seq2seq, different type of Adv
 This script is going to training a neural network to translate English to French.
 If everything is correct, you will see.

-- Download WMT English-to-French translation data, includes training and testing data.
+- Download the WMT English-to-French translation data, it includes both the training and the testing data.
 - Create vocabulary files for English and French from training data.
 - Create the tokenized training and testing data from original training and
   testing data.
@@ -1706,7 +1706,7 @@ In Example page, we provide many examples include Seq2seq, different type of Adv
 ---------

 Sequence to sequence model is commonly be used to translate a language to another.
-Actually it can do many thing you can't imagine, we can translate
+Actually, it can do many thing you can't imagine, we can translate
 a long sentence into short and simple sentence, for example, translation going
 from Shakespeare to modern English.
 With CNN, we can also translate a video into a sentence, i.e. video captioning.
@@ -1807,7 +1807,7 @@ In Example page, we provide many examples include Seq2seq, different type of Adv

    buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]

-If the input is an English sentence with ``3`` tokens, and the corresponding
+If the input is an English sentence with ``3`` tokens and the corresponding
 output is a French sentence with ``6`` tokens, then they will be put in the
 first bucket and padded to length ``5`` for encoder inputs (English sentence),
 and length ``10`` for decoder inputs.
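A small plain-Python sketch of the bucketing rule described here (``PAD_ID`` and the helper function are hypothetical; the real data pipeline also handles special symbols such as end-of-sequence markers, which this sketch glosses over):

.. code-block:: python

    buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]
    PAD_ID = 0  # hypothetical id of the padding symbol

    def put_in_bucket(encoder_tokens, decoder_tokens):
        """Pick the smallest bucket both sentences fit into and pad them up to its sizes."""
        for encoder_size, decoder_size in buckets:
            if len(encoder_tokens) <= encoder_size and len(decoder_tokens) <= decoder_size:
                encoder_pad = [PAD_ID] * (encoder_size - len(encoder_tokens))
                decoder_pad = [PAD_ID] * (decoder_size - len(decoder_tokens))
                return encoder_tokens + encoder_pad, decoder_tokens + decoder_pad
        raise ValueError("sentence pair does not fit in any bucket")

    # A 3-token English sentence with a 6-token French translation lands in bucket (5, 10).
    enc, dec = put_in_bucket([4, 7, 9], [12, 5, 8, 3, 2, 6])
    print(len(enc), len(dec))  # 5 10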
