Commit 0dd6f8a
docs only
1 parent ed8a886 commit 0dd6f8a

1 file changed: +9 -31 lines changed

tensorlayer/files.py

Lines changed: 9 additions & 31 deletions
@@ -30,7 +30,7 @@ def load_mnist_dataset(shape=(-1,784), path="data/mnist/"):
 shape : tuple
 The shape of digit images, defaults is (-1,784)
 path : string
-The path that the data is downloaded to, defaults is data/mnist/
+The path that the data is downloaded to, defaults is ``data/mnist/``.

 Examples
 --------
@@ -102,7 +102,7 @@ def load_cifar10_dataset(shape=(-1, 32, 32, 3), path='data/cifar10/', plotable=F
 second : int
 If ``plotable`` is True, ``second`` is the display time.
 path : string
-The path that the data is downloaded to, defaults is data/cifar10/
+The path that the data is downloaded to, defaults is ``data/cifar10/``.

 Examples
 --------
@@ -221,35 +221,13 @@ def load_ptb_dataset(path='data/ptb/'):
 """Penn TreeBank (PTB) dataset is used in many LANGUAGE MODELING papers,
 including "Empirical Evaluation and Combination of Advanced Language
 Modeling Techniques", "Recurrent Neural Network Regularization".
-
 It consists of 929k training words, 73k validation words, and 82k test
 words. It has 10k words in its vocabulary.

-In "Recurrent Neural Network Regularization", they trained regularized LSTMs
-of two sizes; these are denoted the medium LSTM and large LSTM. Both LSTMs
-have two layers and are unrolled for 35 steps. They initialize the hidden
-states to zero. They then use the final hidden states of the current
-minibatch as the initial hidden state of the subsequent minibatch
-(successive minibatches sequentially traverse the training set).
-The size of each minibatch is 20.
-
-The medium LSTM has 650 units per layer and its parameters are initialized
-uniformly in [−0.05, 0.05]. They apply 50% dropout on the non-recurrent
-connections. They train the LSTM for 39 epochs with a learning rate of 1,
-and after 6 epochs they decrease it by a factor of 1.2 after each epoch.
-They clip the norm of the gradients (normalized by minibatch size) at 5.
-
-The large LSTM has 1500 units per layer and its parameters are initialized
-uniformly in [−0.04, 0.04]. We apply 65% dropout on the non-recurrent
-connections. They train the model for 55 epochs with a learning rate of 1;
-after 14 epochs they start to reduce the learning rate by a factor of 1.15
-after each epoch. They clip the norm of the gradients (normalized by
-minibatch size) at 10.
-
 Parameters
 ----------
 path : : string
-The path that the data is downloaded to, defaults is data/ptb/
+The path that the data is downloaded to, defaults is ``data/ptb/``.

 Returns
 --------
@@ -302,7 +280,7 @@ def load_matt_mahoney_text8_dataset(path='data/mm_test8/'):
 Parameters
 ----------
 path : : string
-The path that the data is downloaded to, defaults is data/mm_test8/
+The path that the data is downloaded to, defaults is ``data/mm_test8/``.

 Returns
 --------
@@ -336,7 +314,7 @@ def load_imdb_dataset(path='data/imdb/', nb_words=None, skip_top=0,
 Parameters
 ----------
 path : : string
-The path that the data is downloaded to, defaults is data/imdb/
+The path that the data is downloaded to, defaults is ``data/imdb/``.

 Examples
 --------
@@ -419,7 +397,7 @@ def load_nietzsche_dataset(path='data/nietzsche/'):
 Parameters
 ----------
 path : string
-The path that the data is downloaded to, defaults is data/nietzsche/
+The path that the data is downloaded to, defaults is ``data/nietzsche/``.

 Examples
 --------
@@ -447,7 +425,7 @@ def load_wmt_en_fr_dataset(path='data/wmt_en_fr/'):
 Parameters
 ----------
 path : string
-The path that the data is downloaded to, defaults is data/wmt_en_fr/
+The path that the data is downloaded to, defaults is ``data/wmt_en_fr/``.

 References
 ----------
@@ -515,7 +493,7 @@ def load_flickr25k_dataset(tag='sky', path="data/flickr25k", n_threads=50, print
 path : string
 The path that the data is downloaded to, defaults is ``data/flickr25k/``.
 n_threads : int, number of thread to read image.
-printable : bool, print infomation when reading images, default is False.
+printable : bool, print infomation when reading images, default is ``False``.

 Examples
 -----------
@@ -575,7 +553,7 @@ def load_flickr1M_dataset(tag='sky', size=10, path="data/flickr1M", n_threads=50
 path : string
 The path that the data is downloaded to, defaults is ``data/flickr25k/``.
 n_threads : int, number of thread to read image.
-printable : bool, print infomation when reading images, default is False.
+printable : bool, print infomation when reading images, default is ``False``.

 Examples
 ----------
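The change is the same in every hunk: literal defaults such as `data/mnist/` and `False` are wrapped in reStructuredText double backticks, so Sphinx renders them as inline code in the generated API docs. A minimal sketch of the convention (this stub is illustrative, not the real loader):

```python
def load_mnist_dataset(shape=(-1, 784), path="data/mnist/"):
    """Stub showing the docstring style after this commit.

    Parameters
    ----------
    shape : tuple
        The shape of digit images, defaults is (-1,784)
    path : string
        The path that the data is downloaded to, defaults is ``data/mnist/``.
    """
    return shape, path

# Double backticks are reST inline literals; a quick check that the
# marker is present in the docstring Sphinx will pick up:
assert "``data/mnist/``" in load_mnist_dataset.__doc__
```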
