
Commit c09d44c

Merge branch 'master' into Update_release
2 parents 3ca6166 + f70c278

File tree

15 files changed: +266 −58 lines


.circleci/config.yml

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ jobs:
 
 ###################################################################################
 # TEST BUILDS with TensorLayer installed from Source - NOT PUSHED TO DOCKER HUB  #
+
 ###################################################################################
 
   test_sources_py2_cpu:

.codacy.yaml

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 ---
 engines:
   bandit:
-    enabled: false # FIXME: make it work
+    enabled: false # FIXME: make it works
 exclude_paths:
 - scripts/*
 - setup.py

README.md

Lines changed: 1 addition & 1 deletion
@@ -145,7 +145,7 @@ The following table shows the training speeds of [VGG16](http://www.robots.ox.ac
 | Mode      | Lib             | Data Format  | Max GPU Memory Usage(MB) | Max CPU Memory Usage(MB) | Avg CPU Memory Usage(MB) | Runtime (sec) |
 | :-------: | :-------------: | :----------: | :----------------------: | :----------------------: | :----------------------: | :-----------: |
 | AutoGraph | TensorFlow 2.0  | channel last | 11833                    | 2161                     | 2136                     | 74            |
-|           | Tensorlayer 2.0 | channel last | 11833                    | 2187                     | 2169                     | 76            |
+|           | TensorLayer 2.0 | channel last | 11833                    | 2187                     | 2169                     | 76            |
 | Graph     | Keras           | channel last | 8677                     | 2580                     | 2576                     | 101           |
 | Eager     | TensorFlow 2.0  | channel last | 8723                     | 2052                     | 2024                     | 97            |
 |           | TensorLayer 2.0 | channel last | 8723                     | 2010                     | 2007                     | 95            |

README.rst

Lines changed: 14 additions & 46 deletions
@@ -17,52 +17,20 @@ to build real-world AI applications. TensorLayer is awarded the 2017
 Best Open Source Software by the `ACM Multimedia
 Society <http://www.acmmm.org/2017/mm-2017-awardees/>`__.
 
-Why another deep learning library: TensorLayer
-==============================================
-
-As deep learning practitioners, we have been looking for a library that
-can address various development purposes. This library is easy to adopt
-by providing diverse examples, tutorials and pre-trained models. Also,
-it allow users to easily fine-tune TensorFlow; while being suitable for
-production deployment. TensorLayer aims to satisfy all these purposes.
-It has three key features:
-
-- **Simplicity** : TensorLayer lifts the low-level dataflow interface
-  of TensorFlow to *high-level* layers / models. It is very easy to
-  learn through the rich `example
-  codes <https://github.com/tensorlayer/awesome-tensorlayer>`__
-  contributed by a wide community.
-- **Flexibility** : TensorLayer APIs are transparent: it does not
-  mask TensorFlow from users; but leaving massive hooks that help
-  *low-level tuning* and *deep customization*.
-- **Zero-cost Abstraction** : TensorLayer can achieve the *full
-  power* of TensorFlow. The following table shows the training speeds
-  of classic models using TensorLayer and native TensorFlow on a Titan
-  X Pascal GPU.
-
-+---------------+-----------------+-----------------+-----------------+
-|               | CIFAR-10        | PTB LSTM        | Word2Vec        |
-+===============+=================+=================+=================+
-| TensorLayer   | 2528 images/s   | 18063 words/s   | 58167 words/s   |
-+---------------+-----------------+-----------------+-----------------+
-| TensorFlow    | 2530 images/s   | 18075 words/s   | 58181 words/s   |
-+---------------+-----------------+-----------------+-----------------+
-
-TensorLayer stands at a unique spot in the library landscape. Other
-wrapper libraries like Keras and TFLearn also provide high-level
-abstractions. They, however, often hide the underlying engine from
-users, which make them hard to customize and fine-tune. On the contrary,
-TensorLayer APIs are generally flexible and transparent. Users often
-find it easy to start with the examples and tutorials, and then dive
-into TensorFlow seamlessly. In addition, TensorLayer does not create
-library lock-in through native supports for importing components from
-Keras, TFSlim and TFLearn.
-
-TensorLayer has a fast growing usage among top researchers and
-engineers, from universities like Imperial College London, UC Berkeley,
-Carnegie Mellon University, Stanford University, and University of
-Technology of Compiegne (UTC), and companies like Google, Microsoft,
-Alibaba, Tencent, Xiaomi, and Bloomberg.
+Design Features
+===============
+
+TensorLayer is a new deep learning library designed with simplicity, flexibility and high performance in mind.
+
+- **Simplicity**: TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive `examples <https://github.com/tensorlayer/awesome-tensorlayer>`__.
+- **Flexibility**: TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
+- **Zero-cost Abstraction**: Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (see the benchmark section below for details).
+
+TensorLayer stands at a unique spot among the TensorFlow wrappers. Other wrappers like Keras and TFLearn
+hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic,
+making it easy to learn while being flexible enough to cope with complex AI tasks.
+TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University,
+Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
 
 Install
 =======

docs/modules/activation.rst

Lines changed: 5 additions & 0 deletions
@@ -35,6 +35,7 @@ For more complex activation, TensorFlow API will be required.
    sign
    hard_tanh
    pixel_wise_softmax
+   mish
 
 Ramp
 ------
@@ -68,6 +69,10 @@ Pixel-wise softmax
 --------------------
 .. autofunction:: pixel_wise_softmax
 
+mish
+---------
+.. autofunction:: mish
+
 Parametric activation
 ------------------------------
 See ``tensorlayer.layers``.

docs/user/contributing.rst

Lines changed: 2 additions & 0 deletions
@@ -40,8 +40,10 @@ For TensorLayer 1.x, it was actively developed and maintained by the following p
 - **Hao Dong** (`@zsdonghao <https://github.com/zsdonghao>`_) - `<https://zsdonghao.github.io>`_
 - **Jonathan Dekhtiar** (`@DEKHTIARJonathan <https://github.com/DEKHTIARJonathan>`_) - `<https://www.jonathandekhtiar.eu>`_
 - **Luo Mai** (`@luomai <https://github.com/luomai>`_) - `<http://www.doc.ic.ac.uk/~lm111/>`_
+- **Pan Wang** (`@FerociousPanda <http://github.com/FerociousPanda>`_) - `<http://github.com/FerociousPanda>`_ (UI)
 - **Simiao Yu** (`@nebulaV <https://github.com/nebulaV>`_) - `<https://nebulav.github.io>`_
 
+
 Numerous other contributors can be found in the `Github Contribution Graph <https://github.com/tensorlayer/tensorlayer/graphs/contributors>`_.

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import tensorflow as tf
+
+import tensorlayer as tl
+from tensorlayer.layers import (Input, Conv2d, Flatten, Dense, MaxPool2d)
+from tensorlayer.models import Model
+
+filename = 'ckpt_parameters.zip'
+url_source = 'https://media.githubusercontent.com/media/tensorlayer/pretrained-models/master/models/'
+
+# download and extract the pretrained checkpoint
+tl.files.maybe_download_and_extract(
+    filename=filename, working_directory='model/', url_source=url_source, extract=True
+)
+
+model_file = 'model/ckpt_parameters'
+
+# convert the ckpt to an npz dict; rename_key=True renames keys to match the TL naming rule
+tl.files.ckpt_to_npz_dict(model_file, rename_key=True)
+weights = np.load('model.npz', allow_pickle=True)
+
+# view the parameter names and weight shapes
+for key in weights.keys():
+    print(key, weights[key].shape)
+
+
+# build the model
+def create_model(inputs_shape):
+    W_init = tl.initializers.truncated_normal(stddev=5e-2)
+    W_init2 = tl.initializers.truncated_normal(stddev=0.04)
+    ni = Input(inputs_shape)
+    nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, name='conv1_1')(ni)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_1')(nn)
+    nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv1_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_2')(nn)
+
+    nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_1')(nn)
+    nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_2')(nn)
+
+    nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_1')(nn)
+    nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_2')(nn)
+
+    nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_1')(nn)
+    nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_2')(nn)
+
+    nn = Flatten(name='flatten')(nn)
+    nn = Dense(1000, act=None, W_init=W_init2, name='output')(nn)
+
+    M = Model(inputs=ni, outputs=nn, name='cnn')
+    return M
+
+
+net = create_model([None, 224, 224, 3])
+# loaded weights whose names are not found among the network's weights will be skipped;
+# if the ckpt already follows the TL naming rule, the model can be restored directly with
+# tl.files.load_and_assign_ckpt(model_dir=, network=, skip=True)
+tl.files.load_and_assign_npz_dict(network=net, skip=True)
+
+# view the restored model parameters
+for w in net.all_weights:
+    print(w.name, w)
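The comment near the end of this example mentions a one-step alternative. A minimal sketch of that path, assuming the checkpoint already follows the TL naming rule and reusing the directory and create_model from the example above (a sketch, not a verified recipe):

import tensorlayer as tl

# assumes create_model() from the example above is in scope and that the ckpt
# already uses TL-style names, so the ckpt -> npz conversion can be skipped
net = create_model([None, 224, 224, 3])
tl.files.load_and_assign_ckpt(model_dir='model/ckpt_parameters', network=net, skip=True)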

examples/reinforcement_learning/tutorial_atari_pong.py

Lines changed: 4 additions & 3 deletions
@@ -7,14 +7,15 @@
 Pixels” which is a minimalistic implementation of deep reinforcement learning by
 using python-numpy and OpenAI gym environment.
 The code here is the reimplementation of Karpathy's Blog by using TensorLayer.
-Compare with Karpathy's code, we store observation for a batch, he store
-observation for a episode only, they store gradients instead. (so we will use
+Compared with Karpathy's code, we store observations for a whole batch, while he stores
+observations for only one episode and accumulates gradients instead (so we will use
 more memory if the observation is very large.)
-FEEL FREE TO JOIN US !
+
 TODO
 -----
 - update grads every step rather than storing all observation!
+
 References
 ------------
 - http://karpathy.github.io/2016/05/31/rl/
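The docstring above trades memory for simplicity by storing a whole batch of observations; the TODO suggests accumulating gradients instead. A minimal sketch of per-episode gradient accumulation, using a toy policy and a stand-in loss rather than the tutorial's actual Pong setup:

import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import Dense, Input
from tensorlayer.models import Model

# toy policy network; the real tutorial uses a Pong-specific model
ni = Input([None, 4])
policy = Model(inputs=ni, outputs=Dense(2)(ni), name='toy_policy')
policy.train()
optimizer = tf.optimizers.Adam(learning_rate=1e-3)

# accumulate gradients episode by episode instead of keeping every
# observation of the batch in memory
grad_sums = [tf.zeros_like(w) for w in policy.trainable_weights]
for _ in range(3):  # pretend each iteration is one episode
    obs = tf.random.normal([10, 4])  # stand-in for one episode's observations
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(policy(obs)))  # stand-in for the policy-gradient loss
    grads = tape.gradient(loss, policy.trainable_weights)
    grad_sums = [s + g for s, g in zip(grad_sums, grads)]
optimizer.apply_gradients(zip(grad_sums, policy.trainable_weights))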

setup.cfg

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ based_on_style=google
 # The number of columns to use for indentation.
 indent_width = 4
 
-# The column limit.
+# The column limit. (larger than usual)
 column_limit=120
 
 # Place each dictionary entry onto its own line.
@@ -76,4 +76,4 @@ no_spaces_around_selected_binary_operators = True
 allow_multiline_lambdas = True
 
 SPLIT_PENALTY_FOR_ADDED_LINE_SPLIT = 10
-SPLIT_PENALTY_AFTER_OPENING_BRACKET = 500
+SPLIT_PENALTY_AFTER_OPENING_BRACKET = 500

tensorlayer/activation.py

Lines changed: 20 additions & 0 deletions
@@ -19,6 +19,7 @@
     'htanh',
     'hard_tanh',
     'pixel_wise_softmax',
+    'mish',
 ]
 
 
@@ -339,6 +340,25 @@ def pixel_wise_softmax(x, name='pixel_wise_softmax'):
     return tf.nn.softmax(x)
 
 
+def mish(x):
+    """Mish activation function.
+
+    Reference: `Mish: A Self Regularized Non-Monotonic Neural Activation Function [Diganta Misra, 2019] <https://arxiv.org/abs/1908.08681>`__
+
+    Parameters
+    ----------
+    x : Tensor
+        input.
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` of the same type as ``x``.
+
+    """
+    return x * tf.math.tanh(tf.math.softplus(x))
+
+
 # Alias
 lrelu = leaky_relu
 lrelu6 = leaky_relu6
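A quick numerical check of the new activation, evaluating only the formula from the diff above:

import tensorflow as tf

# mish(x) = x * tanh(softplus(x))
x = tf.constant([-2.0, 0.0, 2.0])
print((x * tf.math.tanh(tf.math.softplus(x))).numpy())
# approximately [-0.2525, 0.0, 1.9440]; in particular mish(0) = 0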
