
Commit 54497ae

lgarithm authored and luomai committed
Fix issues suggested by codacy. (#344)
* remove two dangerous default values
* fix mnist tutorial based on codacy
* address hao's comments.
* remove unused y_op
* hao conv.py
* hao prepro.py
* hao files.py
* remove str statement
* hao example mnist
* yapf
* hao cifar10
* hao inceptionv3
* hao ptb tfrecord image processing
* hao tutorials
* str comment
* str docs
* Update README.md
* remove unused code
* minor fix
* small fix
1 parent e883032 commit 54497ae

31 files changed: +246, -267 lines changed

README.md

Lines changed: 9 additions & 9 deletions
@@ -19,7 +19,7 @@
 [![Documentation Status](https://readthedocs.org/projects/tensorlayer/badge/?version=latest)](http://tensorlayer.readthedocs.io/en/latest/?badge=latest)
 [![Docker Pulls](https://img.shields.io/docker/pulls/tensorlayer/tensorlayer.svg?maxAge=604800)](https://hub.docker.com/r/tensorlayer/tensorlayer/)
 
-TensorLayer is a deep learning and reinforcement learning library on top of [TensorFlow](https://www.tensorflow.org). It provides rich neural layers and utility functions to help researchers and engineers build real-world AI applications. TensorLayer is awarded the 2017 Best Open Source Software Award by the prestigious [ACM Multimedia Society](http://www.acmmm.org/2017/mm-2017-awardees/).
+TensorLayer is a deep learning and reinforcement learning library on top of [TensorFlow](https://www.tensorflow.org). It provides rich neural layers and utility functions to help researchers and engineers build real-world AI applications. TensorLayer is awarded the 2017 Best Open Source Software by the prestigious [ACM Multimedia Society](http://www.acmmm.org/2017/mm-2017-awardees/).
 
 - Useful links: [Documentation](http://tensorlayer.readthedocs.io), [Examples](http://tensorlayer.readthedocs.io/en/latest/user/example.html), [中文文档](https://tensorlayercn.readthedocs.io), [中文书](http://www.broadview.com.cn/book/5059)
 
@@ -116,7 +116,7 @@ Examples can be found [in this folder](https://github.com/zsdonghao/tensorlayer/
 - Float 16 half-precision model, see [tutorial\_mnist_float16.py](https://github.com/zsdonghao/tensorlayer/blob/master/example/tutorial_mnist_float16.py)
 
 ## Notes
-TensorLayer provides two set of Convolutional layer APIs, see [(Professional)](http://tensorlayer.readthedocs.io/en/latest/modules/layers.html#convolutional-layer-pro) and [(Simplified)](http://tensorlayer.readthedocs.io/en/latest/modules/layers.html#convolutional-layer-simplified) on readthedocs website.
+TensorLayer provides two set of Convolutional layer APIs, see [(Advanced)](http://tensorlayer.readthedocs.io/en/latest/modules/layers.html#convolutional-layer-pro) and [(Basic)](http://tensorlayer.readthedocs.io/en/latest/modules/layers.html#convolutional-layer-simplified) on readthedocs website.
 <!--
 * If you get into trouble, you can start a discussion on [Slack](https://join.slack.com/t/tensorlayer/shared_invite/MjI1NjQ5NTUxOTY5LTE1MDI3MDYwNTItYzYwNmFiZmZkOA), [Gitter](https://gitter.im/tensorlayer/Lobby#?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge>),
 [Help Wanted Issues](https://waffle.io/zsdonghao/tensorlayer),
@@ -128,11 +128,11 @@ TensorLayer provides two set of Convolutional layer APIs, see [(Professional)](h
 
 ## Design Philosophy
 
-As deep learning practitioners, we have been looking for a TensorFlow wrapper library that can serve for various development phases. This library is easy for beginners by offering rich neural network implementations,
-examples and tutorials. Later, its APIs do not prohibit users from manipulating the low-level powerful features of TensorFlow, which is necessary in tackling real-world problems. In the end, the extra wrappers shall not compromise TensorFlow performance, and thus suit for production deployment. TensorLayer is a novel library that aims to satisfy these requirements that can occur in various phases. It has three key features:
+As TensorFlow users, we have been looking for a library that can serve for various development phases. This library is easy for beginners by providing rich neural network implementations,
+examples and tutorials. Later, its APIs shall naturally allow users to leverage the powerful features of TensorFlow, exhibiting best performance in addressing real-world problems. In the end, the extra abstraction shall not compromise TensorFlow performance, and thus suit for production deployment. TensorLayer is a novel library that aims to satisfy these requirements. It has three key features:
 
-- *Simplicity* : TensorLayer lifts the low-level dataflow abstraction of TensorFlow to **high-level** layers. It also provides users with massive examples and tutorials to help bootstrap.
-- *Flexibility* : TensorLayer APIs are transparent: it does not mask TensorFlow from users but leaving massive hooks that allow **low-level tuning**.
+- *Simplicity* : TensorLayer lifts the low-level dataflow abstraction of TensorFlow to **high-level** layers. It also provides users with massive examples and tutorials to minimize learning barrier.
+- *Flexibility* : TensorLayer APIs are transparent: it does not mask TensorFlow from users; but leaving massive hooks that support diverse **low-level tuning**.
 - *Zero-cost Abstraction* : TensorLayer is able to achieve the **full performance** of TensorFlow.
 
 ## Negligible Overhead
@@ -150,12 +150,12 @@ on a Titan X Pascal GPU. Here are the training speeds of respective tasks:
 
 Similar to TensorLayer, Keras and TFLearn are also popular TensorFlow wrapper libraries.
 These libraries are comfortable to start with. They provide high-level abstractions;
-but in turn mask the underlying engine features from users. Though good for bootstrap,
-it becomes hard to manipulate the low-level powerful features of TensorFlow.
+but mask the underlying engine from users. It is thus hard to
+customize model behaviors and touch the essential features of TensorFlow.
 Without compromise in simplicity, TensorLayer APIs are generally more flexible and transparent.
 Users often find it easy to start with the examples and tutorials of TensorLayer, and
 then dive into the TensorFlow low-level APIs only if need.
-TensorLayer does not intend to create library lock-in. Users can easily import models from Keras, TFSlim and TFLearn into
+TensorLayer does not create library lock-in. Users can easily import models from Keras, TFSlim and TFLearn into
 a TensorLayer environment.

example/tutorial_atari_pong.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 #! /usr/bin/python
 # -*- coding: utf-8 -*-
-""" Monte-Carlo Policy Network π(a|s) (REINFORCE)
+""" Monte-Carlo Policy Network π(a|s) (REINFORCE).
 
 To understand Reinforcement Learning, we let computer to learn how to play
 Pong game from the original screen inputs. Before we start, we highly recommend

example/tutorial_bipedalwalker_a3c_continuous_action.py

Lines changed: 2 additions & 1 deletion
@@ -28,6 +28,7 @@
 speed, angular velocity, horizontal speed, vertical speed, position of joints
 and joints angular speed, legs contact with ground, and 10 lidar rangefinder
 measurements. There's no coordinates in the state vector.
+
 """
 
 import multiprocessing
@@ -181,7 +182,7 @@ def work(self):
 if self.name == 'Worker_0' and total_step % 30 == 0:
 self.env.render()
 a = self.AC.choose_action(s)
-s_, r, done, info = self.env.step(a)
+s_, r, done, _info = self.env.step(a)
 
 # set robot falls reward to -2 instead of -100
 if r == -100: r = -2

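The hunk above only renames the unused `info` return value to `_info`; for context, the `self.env.step(a)` call sits inside the classic Gym interaction loop. A minimal, self-contained sketch of such a loop (the environment id and the random-action stand-in for `self.AC.choose_action(s)` are assumptions, not the tutorial's actual code):

    import gym

    env = gym.make('BipedalWalker-v2')  # assumed id; the tutorial's environment may differ
    s = env.reset()
    done, ep_r = False, 0.0
    while not done:
        a = env.action_space.sample()      # stand-in for self.AC.choose_action(s)
        s_, r, done, _info = env.step(a)   # same 4-tuple unpacking as in the hunk
        if r == -100:                      # clip the "robot fell" penalty, as the tutorial does
            r = -2
        ep_r += r
        s = s_
    print("episode reward:", ep_r)
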
example/tutorial_cartpole_ac.py

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,4 @@
-"""
-Actor-Critic using TD-error as the Advantage, Reinforcement Learning.
+"""Actor-Critic using TD-error as the Advantage, Reinforcement Learning.
 
 Actor Critic History
 ----------------------
@@ -30,6 +29,7 @@
 A reward of +1 is provided for every timestep that the pole remains upright.
 The episode ends when the pole is more than 15 degrees from vertical, or the
 cart moves more than 2.4 units from the center.
+
 """
 
 import time

example/tutorial_cifar10.py

Lines changed: 2 additions & 4 deletions
@@ -1,15 +1,13 @@
 #! /usr/bin/python
 # -*- coding: utf-8 -*-
-""" tl.prepro for data augmentation """
 
-import io
-import os
+# tl.prepro for data augmentation
+
 import time
 
 import numpy as np
 import tensorflow as tf
 import tensorlayer as tl
-from PIL import Image
 from tensorlayer.layers import *
 
 sess = tf.InteractiveSession()

example/tutorial_cifar10_tfrecord.py

Lines changed: 13 additions & 12 deletions
@@ -1,16 +1,6 @@
 #! /usr/bin/python
 # -*- coding: utf-8 -*-
-
-import io
-import os
-import time
-
-import numpy as np
-import tensorflow as tf
-import tensorlayer as tl
-from PIL import Image
-from tensorlayer.layers import *
-"""Reimplementation of the TensorFlow official CIFAR-10 CNN tutorials:
+"""Reimplementation of the TensorFlow official CIFAR-10 CNN tutorials.
 
 - 1. This model has 1,068,298 paramters, after few hours of training with GPU,
 accurcy of 86% was found.
@@ -46,7 +36,18 @@
 Reading images from disk and distorting them can use a non-trivial amount
 of processing time. To prevent these operations from slowing down training,
 we run them inside 16 separate threads which continuously fill a TensorFlow queue.
+
 """
+
+import io
+import os
+import time
+import numpy as np
+import tensorflow as tf
+import tensorlayer as tl
+from PIL import Image
+from tensorlayer.layers import *
+
 model_file_name = "model_cifar10_tfrecord.ckpt"
 resume = False  # load model, resume from previous checkpoint?
 
@@ -71,7 +72,7 @@ def data_to_tfrecord(images, labels, filename):
 print("%s exists" % filename)
 return
 print("Converting data into %s ..." % filename)
-cwd = os.getcwd()
+# cwd = os.getcwd()
 writer = tf.python_io.TFRecordWriter(filename)
 for index, img in enumerate(images):
 img_raw = img.tobytes()

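For context on the `data_to_tfrecord(images, labels, filename)` helper touched in the last hunk: with the TF 1.x `tf.python_io.TFRecordWriter` API shown above, such a function typically serializes each image/label pair into a `tf.train.Example`. A hedged sketch (the feature keys 'label' and 'img_raw' are assumptions, not necessarily the tutorial's exact names):

    import tensorflow as tf

    def data_to_tfrecord(images, labels, filename):
        """Serialize image/label pairs into a TFRecord file (TF 1.x sketch)."""
        writer = tf.python_io.TFRecordWriter(filename)
        for img, label in zip(images, labels):
            img_raw = img.tobytes()  # raw bytes, as in the hunk above
            example = tf.train.Example(features=tf.train.Features(feature={
                'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(label)])),
                'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])),
            }))
            writer.write(example.SerializeToString())
        writer.close()

    # usage sketch: data_to_tfrecord(X_train, y_train, "train.cifar10")
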
example/tutorial_frozenlake_dqn.py

Lines changed: 25 additions & 23 deletions
@@ -1,13 +1,4 @@
-import random
-import time
-
-import gym
-import matplotlib.pyplot as plt
-import numpy as np
-import tensorflow as tf
-import tensorlayer as tl
-from tensorlayer.layers import *
-""" Q-Network Q(a, s) - TD Learning, Off-Policy, e-Greedy Exploration (GLIE)
+"""Q-Network Q(a, s) - TD Learning, Off-Policy, e-Greedy Exploration (GLIE).
 
 Q(S, A) <- Q(S, A) + alpha * (R + lambda * Q(newS, newA) - Q(S, A))
 delta_w = R + lambda * Q(newS, newA)
@@ -18,20 +9,31 @@
 CN: https://zhuanlan.zhihu.com/p/25710327
 
 Note: Policy Network has been proved to be better than Q-Learning, see tutorial_atari_pong.py
+
+# The FrozenLake v0 environment
+https://gym.openai.com/envs/FrozenLake-v0
+The agent controls the movement of a character in a grid world. Some tiles of
+the grid are walkable, and others lead to the agent falling into the water.
+Additionally, the movement direction of the agent is uncertain and only partially
+depends on the chosen direction. The agent is rewarded for finding a walkable
+path to a goal tile.
+SFFF (S: starting point, safe)
+FHFH (F: frozen surface, safe)
+FFFH (H: hole, fall to your doom)
+HFFG (G: goal, where the frisbee is located)
+The episode ends when you reach the goal or fall in a hole. You receive a reward
+of 1 if you reach the goal, and zero otherwise.
+
 """
-## The FrozenLake v0 environment
-# https://gym.openai.com/envs/FrozenLake-v0
-# The agent controls the movement of a character in a grid world. Some tiles of
-# the grid are walkable, and others lead to the agent falling into the water.
-# Additionally, the movement direction of the agent is uncertain and only partially
-# depends on the chosen direction. The agent is rewarded for finding a walkable
-# path to a goal tile.
-# SFFF (S: starting point, safe)
-# FHFH (F: frozen surface, safe)
-# FFFH (H: hole, fall to your doom)
-# HFFG (G: goal, where the frisbee is located)
-# The episode ends when you reach the goal or fall in a hole. You receive a reward
-# of 1 if you reach the goal, and zero otherwise.
+
+import time
+import gym
+import matplotlib.pyplot as plt
+import numpy as np
+import tensorflow as tf
+import tensorlayer as tl
+from tensorlayer.layers import *
+
 env = gym.make('FrozenLake-v0')


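Both FrozenLake tutorials in this commit quote the same TD update, Q(S, A) <- Q(S, A) + alpha * (R + lambda * Q(newS, newA) - Q(S, A)). A minimal tabular sketch of that off-policy update with e-greedy exploration (the hyperparameter values are illustrative assumptions, not the tutorials' settings):

    import gym
    import numpy as np

    env = gym.make('FrozenLake-v0')
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    alpha, lambd, eps = 0.85, 0.99, 0.1  # assumed values

    for _episode in range(1000):
        s = env.reset()
        done = False
        while not done:
            # e-greedy exploration
            a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s, :]))
            s_, r, done, _ = env.step(a)
            # off-policy target uses the greedy value of the new state
            Q[s, a] += alpha * (r + lambd * np.max(Q[s_, :]) - Q[s, a])
            s = s_
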
example/tutorial_frozenlake_q_table.py

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,6 @@
-import time
+"""Q-Table learning algorithm.
 
-import gym
-import numpy as np
-"""Q-Table learning algorithm, non deep learning - TD Learning, Off-Policy, e-Greedy Exploration
+Non deep learning - TD Learning, Off-Policy, e-Greedy Exploration
 
 Q(S, A) <- Q(S, A) + alpha * (R + lambda * Q(newS, newA) - Q(S, A))
 
@@ -12,8 +10,13 @@
 
 EN: https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0#.5m3361vlw
 CN: https://zhuanlan.zhihu.com/p/25710327
+
 """
 
+import time
+import gym
+import numpy as np
+
 ## Load the environment
 env = gym.make('FrozenLake-v0')
 render = False  # display the game environment

example/tutorial_generate_text.py

Lines changed: 7 additions & 8 deletions
@@ -1,7 +1,6 @@
 #! /usr/bin/python
 # -*- coding: utf-8 -*-
-
-# Copyright 2016 TensorLayer. All Rights Reserved.
+# Copyright 2018 TensorLayer. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,15 +13,15 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-# ==============================================================================
-"""Example of Synced sequence input and output.
+"""
+Example of Synced sequence input and output.
+
 Generate text using LSTM.
 
 Data: https://github.com/zsdonghao/tensorlayer/tree/master/example/data/
 
 """
 
-import os
 import re
 import time
 
@@ -154,7 +153,6 @@ def main_restore_embedding_layer():
 load_params = tl.files.load_npz(name=model_file_name + '.npz')
 
 x = tf.placeholder(tf.int32, shape=[batch_size])
-y_ = tf.placeholder(tf.int32, shape=[batch_size, 1])
 
 emb_net = tl.layers.EmbeddingInputlayer(inputs=x, vocabulary_size=vocabulary_size, embedding_size=embedding_size, name='embedding_layer')
 
@@ -369,9 +367,10 @@ def loss_fn(outputs, targets, batch_size, sequence_length):
 
 if __name__ == '__main__':
 sess = tf.InteractiveSession()
-"""Restore a pretrained embedding matrix."""
+# Restore a pretrained embedding matrix
 # main_restore_embedding_layer()
-"""How to generate text from a given context."""
+
+# How to generate text from a given context
 main_lstm_generate_text()
 
 #

example/tutorial_image_preprocess.py

Lines changed: 5 additions & 8 deletions
@@ -1,16 +1,13 @@
-import time
-
-import numpy as np
-import tensorflow as tf
-import tensorlayer as tl
-from tensorlayer.prepro import *
-"""
-Data Augmentation by numpy, scipy, threading and queue.
+"""Data Augmentation by numpy, scipy, threading and queue.
 
 Alternatively, we can use TFRecord to preprocess data,
 see `tutorial_cifar10_tfrecord.py` for more details.
+
 """
 
+import time
+import tensorlayer as tl
+
 X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False)


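The file above loads CIFAR-10 with `tl.files.load_cifar10_dataset` and, per its docstring, augments images with `tl.prepro` helpers run across threads. A hedged usage sketch, assuming the TensorLayer 1.x `tl.prepro.threading_data`, `crop` and `flip_axis` helpers behave as documented (the exact arguments here are assumptions):

    import tensorlayer as tl

    X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False)

    def distort_fn(x):
        # per-image augmentation; signatures assumed from TensorLayer 1.x prepro
        x = tl.prepro.crop(x, wrg=24, hrg=24, is_random=True)
        x = tl.prepro.flip_axis(x, axis=1, is_random=True)
        return x

    # run the augmentation over a small batch with multiple threads
    X_batch = tl.prepro.threading_data(X_train[0:32], fn=distort_fn)
    print(X_batch.shape)  # expected: (32, 24, 24, 3)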