
Commit 905b90d

format code
1 parent 67d4d89 commit 905b90d

File tree

12 files changed: +32 -31 lines. The changes are formatting-only (trailing whitespace removed, indentation normalized, a final newline added), so many changed -/+ line pairs in the diffs below look identical: they differ only in invisible whitespace.

doc/api/v2/data.rst (2 additions, 2 deletions)

@@ -1,5 +1,5 @@
 ==================================
-Data Reader Inferface and DataSets
+Data Reader Inferface and DataSets
 ==================================
 
 
@@ -78,7 +78,7 @@ imikolov
     :noindex:
 
 movielens
-+++++++++
++++++++++
 
 .. automodule:: paddle.v2.dataset.movielens
     :members:

doc/api/v2/run_logic.rst (3 additions, 2 deletions)

@@ -20,11 +20,12 @@ Event
 =====
 
 .. automodule:: paddle.v2.event
-    :members:
+    :members:
     :noindex:
 
 Inference
 =========
 
 .. autofunction:: paddle.v2.infer
-    :noindex:
+    :noindex:
+

python/paddle/v2/dataset/cifar.py (3 additions, 3 deletions)

@@ -17,11 +17,11 @@
 This module will download dataset from https://www.cs.toronto.edu/~kriz/cifar.html and
 parse train/test set into paddle reader creators.
 
-The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000
+The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000
 images per class. There are 50000 training images and 10000 test images.
 
-The CIFAR-100 dataset is just like the CIFAR-10, except it has 100 classes containing
-600 images each. There are 500 training images and 100 testing images per class.
+The CIFAR-100 dataset is just like the CIFAR-10, except it has 100 classes containing
+600 images each. There are 500 training images and 100 testing images per class.
 
 """
 
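The docstrings in this commit repeatedly refer to "paddle reader creators". As a hedged sketch (plain Python, no paddle dependency, all names illustrative): a reader creator is a zero-argument callable that returns a fresh iterator over samples each time it is called, which is what lets paddle re-read a dataset every epoch.

```python
# Hedged sketch of the "reader creator" convention the docstrings describe.
# make_reader and its sample data are invented for illustration; a real
# dataset module (e.g. cifar) builds the samples from downloaded files.
def make_reader(samples):
    def reader():
        # A fresh generator is produced on every call, so the dataset
        # can be iterated once per training epoch.
        for features, label in samples:
            yield features, label
    return reader

train = make_reader([([0.1, 0.2], 0), ([0.3, 0.4], 1)])
first_pass = list(train())
second_pass = list(train())  # calling again restarts from the beginning
```

Under this convention, `first_pass` and `second_pass` contain the same two (features, label) samples.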

python/paddle/v2/dataset/conll05.py (5 additions, 5 deletions)

@@ -12,10 +12,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 """
-Conll05 dataset.
-Paddle semantic role labeling Book and demo use this dataset as an example. Because
-Conll05 is not free in public, the default downloaded URL is test set of
-Conll05 (which is public). Users can change URL and MD5 to their Conll dataset.
+Conll05 dataset.
+Paddle semantic role labeling Book and demo use this dataset as an example. Because
+Conll05 is not free in public, the default downloaded URL is test set of
+Conll05 (which is public). Users can change URL and MD5 to their Conll dataset.
 And a pre-trained word vector model based on Wikipedia corpus is used to initialize SRL model.
 """
 
@@ -200,7 +200,7 @@ def test():
     Conll05 test set creator.
 
     Because the train dataset is not free, the test dataset is used for training.
-    It returns a reader creator, each sample in the reader is nine features, including sentence
+    It returns a reader creator, each sample in the reader is nine features, including sentence
     sequence, predicate, predicate context, predicate context flag and tagged sequence.
 
     :return: Train reader creator

python/paddle/v2/dataset/imdb.py (6 additions, 6 deletions)

@@ -14,10 +14,10 @@
 """
 IMDB dataset.
 
-This module download IMDB dataset from
-http://ai.stanford.edu/%7Eamaas/data/sentiment/, which contains a set of 25,000
-highly polar movie reviews for training, and 25,000 for testing. Besides, this
-module also provides API for build dictionary and parse train set and test set
+This module download IMDB dataset from
+http://ai.stanford.edu/%7Eamaas/data/sentiment/, which contains a set of 25,000
+highly polar movie reviews for training, and 25,000 for testing. Besides, this
+module also provides API for build dictionary and parse train set and test set
 into paddle reader creators.
 """
 
@@ -122,7 +122,7 @@ def train(word_idx):
     """
     IMDB train set creator.
 
-    It returns a reader creator, each sample in the reader is an index
+    It returns a reader creator, each sample in the reader is an index
     sequence and label in [0, 1].
 
     :param word_idx: word dictionary
@@ -139,7 +139,7 @@ def test(word_idx):
     """
     IMDB test set creator.
 
-    It returns a reader creator, each sample in the reader is an index
+    It returns a reader creator, each sample in the reader is an index
     sequence and label in [0, 1].
 
     :param word_idx: word dictionary
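Both `train(word_idx)` and `test(word_idx)` above take a word dictionary and yield index sequences. A hedged sketch of that mapping in plain Python (the dictionary contents and helper name here are invented for illustration):

```python
# Illustrative sketch: turning tokenized text into the index sequences
# the imdb/imikolov creators yield, given a word dictionary (word_idx).
word_idx = {"<unk>": 0, "good": 1, "movie": 2}  # made-up toy dictionary

def to_index_sequence(tokens, word_idx, unk="<unk>"):
    # Out-of-vocabulary tokens fall back to the unknown-word index.
    return [word_idx.get(t, word_idx[unk]) for t in tokens]

seq = to_index_sequence(["good", "movie", "bad"], word_idx)
```

Here "bad" is not in the toy dictionary, so it maps to the `<unk>` index.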

python/paddle/v2/dataset/imikolov.py (2 additions, 2 deletions)

@@ -91,7 +91,7 @@ def train(word_idx, n):
     """
     imikolov train set creator.
 
-    It returns a reader creator, each sample in the reader is an index
+    It returns a reader creator, each sample in the reader is an index
     tuple.
 
     :param word_idx: word dictionary
@@ -108,7 +108,7 @@ def test(word_idx, n):
     """
     imikolov test set creator.
 
-    It returns a reader creator, each sample in the reader is an index
+    It returns a reader creator, each sample in the reader is an index
     tuple.
 
     :param word_idx: word dictionary

python/paddle/v2/dataset/movielens.py (3 additions, 3 deletions)

@@ -14,9 +14,9 @@
 """
 Movielens 1-M dataset.
 
-Movielens 1-M dataset contains 1 million ratings from 6000 users on 4000 movies, which was
-collected by GroupLens Research. This module will download Movielens 1-M dataset from
-http://files.grouplens.org/datasets/movielens/ml-1m.zip and parse train/test set
+Movielens 1-M dataset contains 1 million ratings from 6000 users on 4000 movies, which was
+collected by GroupLens Research. This module will download Movielens 1-M dataset from
+http://files.grouplens.org/datasets/movielens/ml-1m.zip and parse train/test set
 into paddle reader creators.
 
 """

python/paddle/v2/dataset/uci_housing.py (2 additions, 2 deletions)

@@ -14,7 +14,7 @@
 """
 UCI Housing dataset.
 
-This module will download dataset from
+This module will download dataset from
 https://archive.ics.uci.edu/ml/machine-learning-databases/housing/ and
 parse train/test set into paddle reader creators.
 """
@@ -75,7 +75,7 @@ def train():
     """
     UCI_HOUSING train set creator.
 
-    It returns a reader creator, each sample in the reader is features after normalization
+    It returns a reader creator, each sample in the reader is features after normalization
     and price number.
 
     :return: Train reader creator

python/paddle/v2/dataset/wmt14.py (3 additions, 3 deletions)

@@ -14,7 +14,7 @@
 """
 WMT14 dataset.
 The original WMT14 dataset is too large and a small set of data for set is provided.
-This module will download dataset from
+This module will download dataset from
 http://paddlepaddle.cdn.bcebos.com/demo/wmt_shrinked_data/wmt14.tgz and
 parse train/test set into paddle reader creators.
 """
@@ -102,7 +102,7 @@ def train(dict_size):
     """
     WMT14 train set creator.
 
-    It returns a reader creator, each sample in the reader is source language word index
+    It returns a reader creator, each sample in the reader is source language word index
     sequence, target language word index sequence and next word index sequence.
 
     :return: Train reader creator
@@ -116,7 +116,7 @@ def test(dict_size):
     """
     WMT14 test set creator.
 
-    It returns a reader creator, each sample in the reader is source language word index
+    It returns a reader creator, each sample in the reader is source language word index
     sequence, target language word index sequence and next word index sequence.
 
     :return: Train reader creator

python/paddle/v2/inference.py (1 addition, 1 deletion)

@@ -49,7 +49,7 @@ def __reader_impl__():
     def iter_infer_field(self, field, **kwargs):
         for result in self.iter_infer(**kwargs):
             yield [each_result[field] for each_result in result]
-
+
     def infer(self, field='value', **kwargs):
         retv = None
         for result in self.iter_infer_field(field=field, **kwargs):
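The `iter_infer_field` method shown in this hunk yields, per batch, a list of one field's values extracted from per-sample result dicts. A hedged standalone sketch of that pattern (the batch data and free-function form are invented for illustration; the real method is bound to an inference object):

```python
# Sketch of the per-field extraction pattern visible in iter_infer_field:
# each batch of result dicts becomes a list of that field's values,
# which a caller like infer() can then concatenate into one flat list.
def iter_infer_field(batches, field):
    for batch in batches:
        yield [each_result[field] for each_result in batch]

batches = [
    [{"value": 0.9}, {"value": 0.1}],  # first inference batch (made up)
    [{"value": 0.5}],                  # second inference batch (made up)
]
flat = [v for chunk in iter_infer_field(batches, "value") for v in chunk]
```

This flattening step mirrors what `infer()` does when it accumulates `retv` across batches.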
