
Commit d5b6ac7

Merge branch 'master' into reinforcement-learning
2 parents 97529f0 + 08f4e8d commit d5b6ac7

File tree

23 files changed, +376 -104 lines changed


.circleci/config.yml

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ jobs:
 
 ###################################################################################
 # TEST BUILDS with TensorLayer installed from Source - NOT PUSHED TO DOCKER HUB #
+
 ###################################################################################
 
   test_sources_py2_cpu:

.codacy.yaml

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 ---
 engines:
   bandit:
-    enabled: false # FIXME: make it work
+    enabled: false # FIXME: make it works
 exclude_paths:
 - scripts/*
 - setup.py

CHANGELOG.md

Lines changed: 31 additions & 0 deletions
@@ -85,6 +85,36 @@ To release a new version, please update the changelog as followed:
 
 ### Contributors
 
+## [2.2.2] - 2020-04-26
+
+TensorLayer 2.2.2 is a maintenance release.
+
+### Added
+
+- Reinforcement learning(#1065)
+- Mish activation(#1068)
+
+### Changed
+
+### Dependencies Update
+
+### Deprecated
+
+### Fixed
+
+- Fix README.
+- Fix package info.
+
+### Removed
+
+### Security
+
+### Contributors
+
+- @zsdonghao
+- @quantumiracle(1065)
+- @Laicheng0830(#1068)
+
 ## [2.2.1] - 2020-01-14
 
 TensorLayer 2.2.1 is a maintenance release.
@@ -591,6 +621,7 @@ To many PR for this update, please check [here](https://github.com/tensorlayer/t
 @zsdonghao @luomai @DEKHTIARJonathan
 
 [Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/2.0....master
+[2.2.2]: https://github.com/tensorlayer/tensorlayer/compare/2.2.1...2.2.2
 [2.2.1]: https://github.com/tensorlayer/tensorlayer/compare/2.2.0...2.2.1
 [2.2.0]: https://github.com/tensorlayer/tensorlayer/compare/2.1.0...2.2.0
 [2.1.0]: https://github.com/tensorlayer/tensorlayer/compare/2.0.2...2.1.0

README.md

Lines changed: 5 additions & 6 deletions
@@ -12,6 +12,7 @@
 [![Documentation Status](https://readthedocs.org/projects/tensorlayer/badge/)](https://tensorlayer.readthedocs.io/)
 [![Build Status](https://travis-ci.org/tensorlayer/tensorlayer.svg?branch=master)](https://travis-ci.org/tensorlayer/tensorlayer)
 [![Downloads](http://pepy.tech/badge/tensorlayer)](http://pepy.tech/project/tensorlayer)
+[![Downloads](https://pepy.tech/badge/tensorlayer/week)](https://pepy.tech/project/tensorlayer/week)
 [![Docker Pulls](https://img.shields.io/docker/pulls/tensorlayer/tensorlayer.svg)](https://hub.docker.com/r/tensorlayer/tensorlayer/)
 [![Codacy Badge](https://api.codacy.com/project/badge/Grade/d6b118784e25435498e7310745adb848)](https://www.codacy.com/app/tensorlayer/tensorlayer)
 
@@ -20,19 +21,17 @@
 <!--- [![Documentation Status](https://readthedocs.org/projects/tensorlayercn/badge/)](https://tensorlayercn.readthedocs.io/)
 <!--- [![PyUP Updates](https://pyup.io/repos/github/tensorlayer/tensorlayer/shield.svg)](https://pyup.io/repos/github/tensorlayer/tensorlayer/) --->
 
-[TensorLayer](https://tensorlayer.readthedocs.io) is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build complex AI models. TensorLayer is awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049).
-TensorLayer can also be found at [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer).
+[TensorLayer](https://tensorlayer.readthedocs.io) is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build advanced AI models quickly, based on this, the community open-sourced mass [tutorials](https://github.com/tensorlayer/tensorlayer/blob/master/examples/reinforcement_learning/README.md) and [applications](https://github.com/tensorlayer). TensorLayer is awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049).
+This project can also be found at [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer).
 
 # News
 
-🔥 Reinforcement Learning Model Zoo: [Low-level APIs for Research](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) and [High-level APIs for Production](https://github.com/tensorlayer/RLzoo)
+🔥 Reinforcement Learning Zoo: [Low-level APIs](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) for professional usage, [High-level APIs](https://github.com/tensorlayer/RLzoo) for simple usage, and a corresponding [Springer textbook](http://springer.com/gp/book/9789811540943)
 
 🔥 [Sipeed Maxi-EMC](https://github.com/sipeed/Maix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)
 
 <!-- 🔥 [NNoM](https://github.com/majianjia/nnom): Run TensorLayer quantized models on the **MCU** (e.g., STM32) (Coming Soon) -->
 
-🔥 [Free GPU and storage resources](https://github.com/fangde/FreeGPU): TensorLayer users can access to free GPU and storage resources donated by SurgicalAI. Thank you SurgicalAI!
-
 # Design Features
 
 TensorLayer is a new deep learning library designed with simplicity, flexibility and high-performance in mind.
@@ -145,7 +144,7 @@ The following table shows the training speeds of [VGG16](http://www.robots.ox.ac
 | Mode      | Lib             | Data Format  | Max GPU Memory Usage(MB) | Max CPU Memory Usage(MB) | Avg CPU Memory Usage(MB) | Runtime (sec) |
 | :-------: | :-------------: | :----------: | :----------------------: | :----------------------: | :----------------------: | :-----------: |
 | AutoGraph | TensorFlow 2.0  | channel last | 11833 | 2161 | 2136 | 74  |
-|           | Tensorlayer 2.0 | channel last | 11833 | 2187 | 2169 | 76  |
+|           | TensorLayer 2.0 | channel last | 11833 | 2187 | 2169 | 76  |
 | Graph     | Keras           | channel last | 8677  | 2580 | 2576 | 101 |
 | Eager     | TensorFlow 2.0  | channel last | 8723  | 2052 | 2024 | 97  |
 |           | TensorLayer 2.0 | channel last | 8723  | 2010 | 2007 | 95  |

README.rst

Lines changed: 14 additions & 46 deletions
@@ -17,52 +17,20 @@ to build real-world AI applications. TensorLayer is awarded the 2017
 Best Open Source Software by the `ACM Multimedia
 Society <http://www.acmmm.org/2017/mm-2017-awardees/>`__.
 
-Why another deep learning library: TensorLayer
-==============================================
-
-As deep learning practitioners, we have been looking for a library that
-can address various development purposes. This library is easy to adopt
-by providing diverse examples, tutorials and pre-trained models. Also,
-it allow users to easily fine-tune TensorFlow; while being suitable for
-production deployment. TensorLayer aims to satisfy all these purposes.
-It has three key features:
-
-- **Simplicity** : TensorLayer lifts the low-level dataflow interface
-  of TensorFlow to *high-level* layers / models. It is very easy to
-  learn through the rich `example
-  codes <https://github.com/tensorlayer/awesome-tensorlayer>`__
-  contributed by a wide community.
-- **Flexibility** : TensorLayer APIs are transparent: it does not
-  mask TensorFlow from users; but leaving massive hooks that help
-  *low-level tuning* and *deep customization*.
-- **Zero-cost Abstraction** : TensorLayer can achieve the *full
-  power* of TensorFlow. The following table shows the training speeds
-  of classic models using TensorLayer and native TensorFlow on a Titan
-  X Pascal GPU.
-
-+---------------+-----------------+-----------------+-----------------+
-|               | CIFAR-10        | PTB LSTM        | Word2Vec        |
-+===============+=================+=================+=================+
-| TensorLayer   | 2528 images/s   | 18063 words/s   | 58167 words/s   |
-+---------------+-----------------+-----------------+-----------------+
-| TensorFlow    | 2530 images/s   | 18075 words/s   | 58181 words/s   |
-+---------------+-----------------+-----------------+-----------------+
-
-TensorLayer stands at a unique spot in the library landscape. Other
-wrapper libraries like Keras and TFLearn also provide high-level
-abstractions. They, however, often hide the underlying engine from
-users, which make them hard to customize and fine-tune. On the contrary,
-TensorLayer APIs are generally flexible and transparent. Users often
-find it easy to start with the examples and tutorials, and then dive
-into TensorFlow seamlessly. In addition, TensorLayer does not create
-library lock-in through native supports for importing components from
-Keras, TFSlim and TFLearn.
-
-TensorLayer has a fast growing usage among top researchers and
-engineers, from universities like Imperial College London, UC Berkeley,
-Carnegie Mellon University, Stanford University, and University of
-Technology of Compiegne (UTC), and companies like Google, Microsoft,
-Alibaba, Tencent, Xiaomi, and Bloomberg.
+Design Features
+=================
+
+TensorLayer is a new deep learning library designed with simplicity, flexibility and high-performance in mind.
+
+- **Simplicity** : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive [examples](https://github.com/tensorlayer/awesome-tensorlayer).
+- **Flexibility** : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
+- **Zero-cost Abstraction** : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).
+
+TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn
+hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic,
+making it easy to learn while being flexible enough to cope with complex AI tasks.
+TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University,
+Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
 
 Install
 =======

docs/modules/activation.rst

Lines changed: 5 additions & 0 deletions
@@ -35,6 +35,7 @@ For more complex activation, TensorFlow API will be required.
    sign
    hard_tanh
    pixel_wise_softmax
+   mish
 
 Ramp
 ------
@@ -68,6 +69,10 @@ Pixel-wise softmax
 --------------------
 .. autofunction:: pixel_wise_softmax
 
+mish
+---------
+.. autofunction:: mish
+
 Parametric activation
 ------------------------------
 See ``tensorlayer.layers``.
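
For reference, the `mish` autofunction added above documents the activation from Misra (2019), commonly defined as x · tanh(softplus(x)). A minimal standalone sketch of that formula, for illustration only (TensorLayer's own `mish` may differ in signature):

```python
import tensorflow as tf

def mish(x):
    # mish(x) = x * tanh(softplus(x)); smooth, non-monotonic, unbounded above
    return x * tf.math.tanh(tf.math.softplus(x))

print(mish(tf.constant([-1.0, 0.0, 1.0])))
```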

docs/modules/rein.rst

Lines changed: 4 additions & 1 deletion
@@ -1,7 +1,10 @@
 API - Reinforcement Learning
 ==============================
 
-Reinforcement Learning.
+We provide two reinforcement learning libraries:
+
+- `RL-tutorial <https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning>`__ for professional users with low-level APIs.
+- `RLzoo <https://rlzoo.readthedocs.io/en/latest/>`__ for simple usage with high-level APIs.
 
 .. automodule:: tensorlayer.rein

docs/user/contributing.rst

Lines changed: 2 additions & 0 deletions
@@ -40,8 +40,10 @@ For TensorLayer 1.x, it was actively developed and maintained by the following p
 - **Hao Dong** (`@zsdonghao <https://github.com/zsdonghao>`_) - `<https://zsdonghao.github.io>`_
 - **Jonathan Dekhtiar** (`@DEKHTIARJonathan <https://github.com/DEKHTIARJonathan>`_) - `<https://www.jonathandekhtiar.eu>`_
 - **Luo Mai** (`@luomai <https://github.com/luomai>`_) - `<http://www.doc.ic.ac.uk/~lm111/>`_
+- **Pan Wang** (`@FerociousPanda <http://github.com/FerociousPanda>`_) - `<http://github.com/FerociousPanda>`_ (UI)
 - **Simiao Yu** (`@nebulaV <https://github.com/nebulaV>`_) - `<https://nebulav.github.io>`_
 
+
 Numerous other contributors can be found in the `Github Contribution Graph <https://github.com/tensorlayer/tensorlayer/graphs/contributors>`_.
examples/data_process/tutorial_fast_affine_transform.py

Lines changed: 17 additions & 8 deletions
@@ -8,10 +8,10 @@
 import multiprocessing
 import time
 
-import cv2
 import numpy as np
-import tensorflow as tf
 
+import cv2
+import tensorflow as tf
 import tensorlayer as tl
 
 # tl.logging.set_verbosity(tl.logging.DEBUG)
@@ -21,11 +21,18 @@
 
 def create_transformation_matrix():
     # 1. Create required affine transformation matrices
-    M_rotate = tl.prepro.affine_rotation_matrix(angle=20)
-    M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=1)
-    M_shift = tl.prepro.affine_shift_matrix(wrg=0.1, hrg=0, h=h, w=w)
-    M_shear = tl.prepro.affine_shear_matrix(x_shear=0.2, y_shear=0)
-    M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=0.8)
+    ## fixed
+    # M_rotate = tl.prepro.affine_rotation_matrix(angle=20)
+    # M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=1)
+    # M_shift = tl.prepro.affine_shift_matrix(wrg=0.1, hrg=0, h=h, w=w)
+    # M_shear = tl.prepro.affine_shear_matrix(x_shear=0.2, y_shear=0)
+    # M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=0.8)
+    ## random
+    M_rotate = tl.prepro.affine_rotation_matrix(angle=(-20, 20))
+    M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=0.5)
+    M_shift = tl.prepro.affine_shift_matrix(wrg=(-0.1,0.1), hrg=(-0.1,0.1), h=h, w=w)
+    M_shear = tl.prepro.affine_shear_matrix(x_shear=(-0.2,0.2), y_shear=(-0.2,0.2))
+    M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=(0.8,1.2))
 
     # 2. Combine matrices
     # NOTE: operations are applied in a reversed order (i.e., rotation is performed first)
@@ -55,7 +62,8 @@ def example2():
     st = time.time()
     for _ in range(100):  # Repeat 100 times and compute the averaged speed
         transform_matrix = create_transformation_matrix()
-        result = tl.prepro.affine_transform_cv2(image, transform_matrix)  # Transform the image using a single operation
+        result = tl.prepro.affine_transform_cv2(image, transform_matrix, border_mode='replicate')  # Transform the image using a single operation
+        tl.vis.save_image(result, '_result_fast_{}.png'.format(_))
     print("apply all transforms once took %fs for each image" % ((time.time() - st) / 100))  # usually 50x faster
     tl.vis.save_image(result, '_result_fast.png')
 
@@ -98,6 +106,7 @@ def _map_fn(image_path, target):
     st = time.time()
     for img, target in dataset:
         n_step += 1
+        pass
     assert n_step == n_epoch * n_data / batch_size
     print("dataset APIs took %fs for each image" % ((time.time() - st) / batch_size / n_step))  # CPU ~ 100%
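
As a reading aid for the diff above: each `tl.prepro.affine_*_matrix` call builds one 3x3 affine matrix, the matrices are combined by matrix multiplication, and the image is warped exactly once with OpenCV. A minimal sketch of that flow, assuming a placeholder image size and the `tl.prepro` helpers this tutorial relies on (`transform_matrix_offset_center` moves the transform origin to the image center before the warp):

```python
import numpy as np
import tensorlayer as tl

h, w = 300, 400
image = np.zeros((h, w, 3), dtype=np.uint8)  # placeholder; the tutorial loads a real photo

# random affine matrices, as in the updated create_transformation_matrix()
M_rotate = tl.prepro.affine_rotation_matrix(angle=(-20, 20))
M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=(0.8, 1.2))

# combine; operations are applied in reversed order (rotation first)
M_combined = M_zoom.dot(M_rotate)

# shift the transform origin to the image center, then warp once
transform_matrix = tl.prepro.transform_matrix_offset_center(M_combined, x=w, y=h)
result = tl.prepro.affine_transform_cv2(image, transform_matrix, border_mode='replicate')
```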

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import tensorlayer as tl
+from tensorlayer.layers import (Input, Conv2d, Flatten, Dense, MaxPool2d)
+from tensorlayer.models import Model
+from tensorlayer.files import maybe_download_and_extract
+import numpy as np
+import tensorflow as tf
+
+filename = 'ckpt_parameters.zip'
+url_score = 'https://media.githubusercontent.com/media/tensorlayer/pretrained-models/master/models/'
+
+# download weights
+down_file = tl.files.maybe_download_and_extract(
+    filename=filename, working_directory='model/', url_source=url_score, extract=True
+)
+
+model_file = 'model/ckpt_parameters'
+
+# ckpt to npz, rename_key used to match TL naming rule
+tl.files.ckpt_to_npz_dict(model_file, rename_key=True)
+weights = np.load('model.npz', allow_pickle=True)
+
+# view the parameter names and weight shapes
+for key in weights.keys():
+    print(key, weights[key].shape)
+
+
+# build model
+def create_model(inputs_shape):
+    W_init = tl.initializers.truncated_normal(stddev=5e-2)
+    W_init2 = tl.initializers.truncated_normal(stddev=0.04)
+    ni = Input(inputs_shape)
+    nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, name='conv1_1')(ni)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_1')(nn)
+    nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv1_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_2')(nn)
+
+    nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_1')(nn)
+    nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_2')(nn)
+
+    nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_1')(nn)
+    nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_2')(nn)
+
+    nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_1')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_1')(nn)
+    nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_2')(nn)
+    nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_2')(nn)
+
+    nn = Flatten(name='flatten')(nn)
+    nn = Dense(1000, act=None, W_init=W_init2, name='output')(nn)
+
+    M = Model(inputs=ni, outputs=nn, name='cnn')
+    return M
+
+
+net = create_model([None, 224, 224, 3])
+# loaded weights whose name is not found in the network's weights will be skipped
+# if the ckpt uses the same naming rule as TL, we can restore the model with tl.files.load_and_assign_ckpt(model_dir=, network=, skip=True)
+tl.files.load_and_assign_npz_dict(network=net, skip=True)
+
+# you can use the following code to view the restored model parameters
+net_weights_name = [w.name for w in net.all_weights]
+for i in range(len(net_weights_name)):
+    print(net_weights_name[i], net.all_weights[net_weights_name.index(net_weights_name[i])])
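
A side note on the final inspection loop in this new file: `net_weights_name.index(net_weights_name[i])` evaluates to `i` whenever weight names are unique, so an equivalent and simpler sketch (assuming unique names, and printing shapes rather than full tensors) is:

```python
# equivalent inspection loop: pair each weight name with its tensor directly
for name, weight in zip(net_weights_name, net.all_weights):
    print(name, weight.shape)
```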
