Releases: tensorlayer/TensorLayer
TensorLayer 1.8.4rc0 ~ 1
TL Models - Provides pre-trained VGG16, SqueezeNet and MobileNetV1 in one line of code (by @lgarithm @zsdonghao); more models will be provided soon!
- Classify ImageNet classes, see tutorial_models_mobilenetv1.py
 
    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get the whole model
    >>> net = tl.models.MobileNetV1(x)
    >>> # restore pre-trained parameters
    >>> sess = tf.InteractiveSession()
    >>> net.restore_params(sess)
    >>> # use for inferencing
    >>> probs = tf.nn.softmax(net.outputs)
- Extract features and train a classifier with 100 classes
 
    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get model without the last layer
    >>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
    >>> # add one more layer
    >>> net = Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
    >>> net = FlattenLayer(net, name='flatten')
    >>> # initialize all parameters
    >>> sess = tf.InteractiveSession()
    >>> tl.layers.initialize_global_variables(sess)
    >>> # restore pre-trained parameters
    >>> cnn.restore_params(sess)
    >>> # train your own classifier (only update the last layer; see the training sketch after the reuse example below)
    >>> train_params = tl.layers.get_variables_with_name('out')
- Reuse model
 
    >>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get network without the last layer
    >>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
    >>> # reuse the parameters with different input
    >>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
    >>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
    >>> sess = tf.InteractiveSession()
    >>> net1.restore_params(sess)
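Continuing the classifier snippet above, a minimal training sketch. The label placeholder y_, the optimizer choice, the learning rate and the batch_images / batch_labels arrays are illustrative assumptions, not part of the release.

    >>> # hedged sketch: y_ and the optimizer settings are assumptions
    >>> y_ = tf.placeholder(tf.int64, [None])
    >>> cost = tl.cost.cross_entropy(net.outputs, y_, name='cost')
    >>> # update only the last layer's variables collected in train_params
    >>> train_op = tf.train.AdamOptimizer(1e-4).minimize(cost, var_list=train_params)
    >>> # then, per batch:
    >>> sess.run(train_op, feed_dict={x: batch_images, y_: batch_labels})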
TensorLayer 1.8.3
This release focuses on model compression and acceleration; feel free to discuss here.
New APIs
- TenaryDenseLayer, TenaryConv2d, DorefaDenseLayer, DorefaConv2d for Ternary Weight Net and DoReFa-Net (by @XJTUWYD)
- BinaryDenseLayer, BinaryConv2d, SignLayer, ScaleLayer for BinaryNet (by @zsdonghao)
- tl.act.htanh for BinaryNet (by @zsdonghao)
- GlobalMeanPool3d, GlobalMaxPool3d (by @zsdonghao)
- ZeroPad1d, ZeroPad2d, ZeroPad3d (by @zsdonghao)
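As an illustration, a minimal sketch stacking the new binarized layers on MNIST-shaped input; the argument names follow the usual TensorLayer layer pattern but should be treated as assumptions rather than the release's exact API.

    >>> x = tf.placeholder(tf.float32, [None, 784])
    >>> net = tl.layers.InputLayer(x, name='input')
    >>> # binarized dense layer, using the new hard tanh activation
    >>> net = tl.layers.BinaryDenseLayer(net, n_units=256, act=tl.act.htanh, name='bd1')
    >>> # binarize activations before the next layer
    >>> net = tl.layers.SignLayer(net, name='sign1')
    >>> net = tl.layers.BinaryDenseLayer(net, n_units=10, name='output')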
New Updates
- Fixed bug of tl.utils.predict #426 56335c5 (by @xionghhcs)
- Enable skipping biases in Conv3dLayer, in the same way as beta and gamma in BatchNormLayer 7a5b258 (by @lllcho)
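For illustration, biases can presumably be skipped by passing b_init=None, following the usual TensorLayer convention; the shape and strides below are arbitrary examples.

    >>> # hedged sketch: b_init=None skips the biases (assumption)
    >>> net = tl.layers.Conv3dLayer(net, act=tf.nn.relu,
    ...         shape=(2, 2, 2, 3, 32), strides=(1, 2, 2, 2, 1),
    ...         padding='SAME', b_init=None, name='cnn3d')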
New Examples
- SqueezeNet (ImageNet). AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, see tutorial_squeezenet.py (by @zsdonghao)
 - BinaryNet. Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, see tutorial_binarynet_mnist_cnn.py (by @zsdonghao)
 - Ternary Weight Network, see the mnist and cifar10 examples. (by @XJTUWYD)
 - DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, see the mnist and cifar10 examples. (by @XJTUWYD)
 
TensorLayer 1.8.3rc0
New Updates
- Fixed bug of tl.utils.predict #426 56335c5 (by @xionghhcs)
- Enable skipping biases in Conv3dLayer, in the same way as beta and gamma in BatchNormLayer 7a5b258 (by @lllcho)
New Examples
- SqueezeNet (ImageNet). AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, see tutorial_squeezenet.py (by @zsdonghao)
 - BinaryNet (MNIST). Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, see tutorial_binarynet_mnist_cnn.py (by @zsdonghao)
 
TensorLayer 1.8.2
As this version is more stable, we highly recommend that users update to it.
Functions
- Binary Neural Network (by @zsdonghao)
 
This is an experimental API package for building binary nets. At the moment we use matrix multiplication rather than add-minus and bit-count operations, so these APIs do not speed up inference. For production, you can train the model via TensorLayer and deploy it in a customized C/C++ implementation (we may provide an extra C/C++ binary-net framework that can load models from TensorLayer).
Note that these experimental APIs may change in the future.
- Load the Street View House Numbers (SVHN) dataset in 1 line of code (by @zsdonghao)
 - Load Fashion-MNIST in 1 line of code (by @AutuanLiu)
 - SeparableConv2d, which performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. DepthwiseConv2d performs the depthwise convolution only, which allows us to add batch normalization between the depthwise and pointwise convolutions, as the sketch below shows. (by @zsdonghao)
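To illustrate the difference, a minimal sketch that inserts batch normalization between the depthwise and pointwise steps, something a single SeparableConv2d cannot express; the argument names are recalled from the 1.8 layer API and should be treated as assumptions.

    >>> x = tf.placeholder(tf.float32, [None, 28, 28, 1])
    >>> net = tl.layers.InputLayer(x, name='input')
    >>> # depthwise step only
    >>> net = tl.layers.DepthwiseConv2d(net, shape=(3, 3), strides=(1, 1), name='depthwise')
    >>> # batch normalization between the depthwise and pointwise convolutions
    >>> net = tl.layers.BatchNormLayer(net, act=tf.nn.relu, is_train=True, name='bn')
    >>> # pointwise (1x1) convolution mixes the channels
    >>> net = tl.layers.Conv2d(net, 32, (1, 1), (1, 1), name='pointwise')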
Updates
- Use __all__ to control the import of all files, see 26d7b40
- Update logging 08a1199 (by @lllcho)
 - Add doc on how to build docker image (by @AutuanLiu )
 
Bug Fix
- Fixed bug of RNN (by @nebulaV )
 
Documentation maintained by @lgarithm @luomai @wagamamaz @zsdonghao
TensorLayer 1.8.1
We highly recommend that users update to 1.8.1:
Updates
- Implement unit tests for layers (by @zsdonghao)
 - Update the basic layer to track parameters, layer outputs and dropout probabilities of previous layers (by @zsdonghao @luomai @lgarithm)
- Fix bug for RNN layer with n_layer > 1 (by @nebulaV)
- Remove the use of global variables and fix critical bugs (by @luomai @zsdonghao)
 
TensorLayer 1.8.0
We recommend that users update and report bugs or issues.
Features
- Experimentally support the Command-Line-Interface (CLI) module. (@luomai @lgarithm)
- Support cli: tl train, which can bootstrap a GPU/CPU parallel training job.
 - Use logging instead of print() to output logs. (by @luomai @lgarithm)
 - Update dropout implementation of RNN layers (by @nebulaV)
 - Layers support slicing and iterating:
 
>>> x = tf.placeholder("float32", [None, 100])
>>> n = tl.layers.InputLayer(x, name='in')
>>> n = tl.layers.DenseLayer(n, 80, name='d1')
>>> n = tl.layers.DenseLayer(n, 80, name='d2')
>>> print(n)
... Last layer is: DenseLayer (d2) [None, 80]
The outputs can be sliced as follows:
>>> n2 = n[:, :30]
>>> print(n2)
... Last layer is: Layer (d2) [None, 30]
The outputs of all layers can be iterated as follows:
>>> for l in n:
>>>    print(l)
... Tensor("d1/Identity:0", shape=(?, 80), dtype=float32)
... Tensor("d2/Identity:0", shape=(?, 80), dtype=float32)APIs
- Simplify DeformableConv2dLayer into DeformableConv2d (by @zsdonghao)
- Merge tl.ops into tl.utils (by @luomai)
- DeConv2d no longer requires out_size for TensorFlow 1.3+ (by @zsdonghao)
- ElementwiseLayer supports activation (by @zsdonghao)
- DepthwiseConv2d supports rate 91e5824 (by @zsdonghao)
- GroupConv2d #363 6ee4bca (by @Windaway)
Others
- Address codebase issues suggested by codacy (by @luomai @zsdonghao @lgarithm)
 - Optimize the layers folder structure. (by @zsdonghao @luomai)
 - Many documentation fixes and improvements (by @zsdonghao @luomai @lgarithm)
 - Mini contribution guide in 5 lines. (by @lgarithm @luomai)
 - Set up many CI tests. (@lgarithm @luomai)
 
TensorLayer 1.8.0rc
This is a pre-release version. We recommend that users update and report bugs or issues.
Features
- Experimentally support the Command-Line-Interface (CLI) module. (@luomai @lgarithm)
- Support cli: tl train, which can bootstrap a GPU/CPU parallel training job.
 - Use logging instead of print() to output logs. (by @luomai @lgarithm)
 - Update dropout implementation of RNN layers (by @nebulaV)
 
APIs
- Simplify DeformableConv2dLayer into DeformableConv2d (by @zsdonghao)
- Merge tl.ops into tl.utils (by @luomai)
- DeConv2d no longer requires out_size for TensorFlow 1.3+ (by @zsdonghao)
Others
- Address codebase issues suggested by codacy (by @luomai @zsdonghao @lgarithm)
 - Optimize the layers folder structure. (by @zsdonghao @luomai)
 - Many documentation fixes and improvements (by @zsdonghao @luomai @lgarithm)
 - Mini contribution guide in 5 lines. (by @lgarithm @luomai)
 - Set up many CI tests. (@lgarithm @luomai)
 
TensorLayer 1.7.4
This release includes the following:
- New Update
- Distributed training of Inception V3: modified optimizer and learning rate to match the paper b8b3d9c (by @jorgemf)
 - Travis test (by @jorgemf )
 - Format core library using yapf 92c861a (by @luomai )
- Use stack_bidirectional_dynamic_rnn for multi-layer BiDynamicRNN 36ff5f9 (by @matthew-z)
- tl.vis.save_image supports grey-scale images cbbd0b7 (by @zsdonghao)
- Update layer API for issue #289 with backlink to documentation 4866017 (by @DEKHTIARJonathan)
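A small usage sketch for the grey-scale case; passing a 2-D array without a channel axis is the assumed behaviour enabled by this change.

    >>> import numpy as np
    >>> # a 2-D (height x width) array, i.e. a grey-scale image with no channel axis
    >>> img = np.random.randint(0, 256, size=(28, 28)).astype(np.uint8)
    >>> tl.vis.save_image(img, '_grey.png')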
 
 
TensorLayer 1.7.3
This release includes the following:
- New Support
- Official docker (by @lgarithm)
 - Travis test (by @lgarithm)
 
 - New Example
 - New Update
- Fixed bug of AtrousConv2dLayer when printing act #269 3beebb9 (by @zsdonghao)
- Fixed bug of deconv2d_bilinear_upsampling_initializer with np.float32 #271 abc99e7 (by @zsdonghao)
- Changed utf8 to utf-8 (by @cnglen and @lgarithm)
 
TensorLayer 1.7.2
This release includes the following:
- News
- The TensorPort team starts to support distributed training of TensorLayer; see the discussion in issues 243.
 - A Chinese book is coming soon: 《深度学习:一起玩转TensorLayer》 (Deep Learning: Play with TensorLayer), 电子工业出版社 (Publishing House of Electronics Industry, PHEI).
 - An interview of TensorLayer: Hao Dong and Luo Mai on TensorLayer and the Chinese Deep Learning Community.
 
- New Support
- Distributed training APIs contributed by TensorPort. See tiny example and tl.distributed (alpha version) (by @jorgemf).
 - tl.prepro.rgb_to_hsv: converts an RGB image to an HSV image (by @zsdonghao).
 - tl.prepro.hsv_to_rgb: converts an HSV image to an RGB image (by @zsdonghao).
 - tl.prepro.adjust_hsv: adjusts the hue of an RGB image (by @zsdonghao).
 - tl.files.load_voc_dataset: supports test sets of 2007 and 2012 (by @zsdonghao).
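A round-trip sketch with the new colour-space helpers; the uint8 input and the shapes are illustrative assumptions.

    >>> import numpy as np
    >>> img = np.random.randint(0, 256, size=(100, 100, 3)).astype(np.uint8)
    >>> # RGB -> HSV and back; hue manipulation would go in between
    >>> hsv = tl.prepro.rgb_to_hsv(img)
    >>> rgb = tl.prepro.hsv_to_rgb(hsv)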
 
- New Update
- tl.layers.initialize_rnn_state: adds the feed_dict argument for initialize_rnn_state (by @Tbabm).