TensorLayer 1.8.4

@zsdonghao zsdonghao released this 13 Apr 15:51
· 1435 commits to master since this release
65e4029

New Support

  • Release experimental APIs to download and visualize MPII dataset (Pose Estimation) in one line of code (by @zsdonghao)
>>> import pprint
>>> import tensorlayer as tl
>>> img_train_list, ann_train_list, img_test_list, ann_test_list = tl.files.load_mpii_pose_dataset()
>>> image = tl.vis.read_image(img_train_list[0])
>>> tl.vis.draw_mpii_pose_to_image(image, ann_train_list[0], 'image.png')
>>> pprint.pprint(ann_train_list[0])
  • Release tl.models API - Provides pre-trained VGG16, SqueezeNet and MobileNetV1 in one line of code (by @lgarithm @zsdonghao), more models will be provided soon!

Classify ImageNet classes, see tutorial_models_mobilenetv1.py

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> net = tl.models.MobileNetV1(x)
>>> # restore pre-trained parameters
>>> sess = tf.InteractiveSession()
>>> net.restore_params(sess)
>>> # use for inference
>>> probs = tf.nn.softmax(net.outputs)
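
For reference, `tf.nn.softmax` simply normalizes the network's logits into a probability distribution over the classes. A minimal NumPy sketch of the same computation (not the TF implementation, just the math):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# Toy logits for a 3-class problem; each row sums to 1 after softmax.
logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
```

The subtraction of the row maximum does not change the result but prevents overflow for large logits.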

Extract features and Train a classifier with 100 classes

>>> from tensorlayer.layers import Conv2d, FlattenLayer
>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
>>> # add one more layer
>>> net = Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
>>> net = FlattenLayer(net, name='flatten')
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained parameters
>>> cnn.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('out')
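
`tl.layers.get_variables_with_name` selects the variables whose names contain a given substring, so the optimizer only updates that subset. A toy sketch of the idea, using a plain dict in place of the TF graph (the parameter names below are hypothetical):

```python
# Hypothetical parameter store standing in for the TF variable collection.
params = {
    'mobilenetv1/conv1/W': 'frozen-weights',
    'mobilenetv1/conv1/b': 'frozen-bias',
    'out/W': 'trainable-weights',
    'out/b': 'trainable-bias',
}

def get_variables_with_name(name, store):
    """Return the variables whose name contains the given substring."""
    return [v for k, v in store.items() if name in k]

# Only the last layer's parameters are handed to the optimizer.
train_params = get_variables_with_name('out', params)
```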

Reuse model

>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get network without the last layer
>>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
>>> # reuse the parameters with different input
>>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
>>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
>>> sess = tf.InteractiveSession()
>>> net1.restore_params(sess)
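
`reuse=True` means `net2` is built over the same underlying variables as `net1`, which is why restoring parameters once is enough. A toy sketch of the sharing semantics (names hypothetical, no TF involved): the first build creates a variable, the reused build gets the same object back.

```python
class SharedScope:
    """Toy variable scope: first build creates a variable, reuse returns it."""
    def __init__(self):
        self.variables = {}

    def get_variable(self, name, init):
        # setdefault only stores `init` if the name is new,
        # otherwise the existing variable is returned unchanged.
        return self.variables.setdefault(name, init)

scope = SharedScope()
w1 = scope.get_variable('conv/W', init=[0.0])  # net1 creates the variable
w2 = scope.get_variable('conv/W', init=[9.9])  # net2 reuses it; init is ignored

w1[0] = 0.5  # "restore" a pre-trained value through net1
```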

New Example

  • TensorFlow Dataset API for VOC dataset augmentation here (by @zsdonghao)

New Update

  • Update tl.iterate.minibatch to support list input (by @zsdonghao)
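
The updated `tl.iterate.minibatch` accepts plain Python lists as well as NumPy arrays, which is convenient when the inputs are file paths. A simplified sketch of a minibatch generator with that behavior (no shuffling, last batch may be smaller):

```python
def minibatches(inputs, targets, batch_size):
    """Yield successive (inputs, targets) batches; works on lists or arrays."""
    assert len(inputs) == len(targets)
    for start in range(0, len(inputs), batch_size):
        yield inputs[start:start + batch_size], targets[start:start + batch_size]

X = ['img0', 'img1', 'img2', 'img3', 'img4']  # list input, e.g. image paths
y = [0, 1, 0, 1, 1]
batches = list(minibatches(X, y, batch_size=2))
```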

API Change Log

@DEKHTIARJonathan gives a list of API changes in #479:

    1. Layer API Change

Since `Layer` is an absolutely central class, any change here leads to changes everywhere.
If it must be modified, it should be done with a deprecation warning.

## Before
layer = tl.layers.BatchNormLayer(layer=layer)
layer = tl.layers.PReluLayer(layer=layer)

## Now
layer = tl.layers.BatchNormLayer(prev_layer=layer)
layer = tl.layers.PReluLayer(prev_layer=layer)

Commit introduced this change: b2e6ccc

Why was the API changed? As you may guess, this change alone led to many projects raising errors and needing updates. We already struggle to keep tutorials and examples around TL up to date, and this change does not help with backward compatibility.

    2. DeConv2d API Change
## Before
tl.layers.DeConv2d(layer=layer, n_out_channel=16)

## Now
tl.layers.DeConv2d(layer=layer, n_filter=16)

Here we have two problems:

  1. This layer now has an API inconsistent with the rest of the TL library (it uses layer instead of prev_layer).
  2. Again, there is no deprecation warning for the change from n_out_channel to n_filter, which may immediately break most GANs/AEs until they are fixed.
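
A keyword rename like this can keep backward compatibility for a release or two by accepting the old keyword and emitting a `DeprecationWarning`. A sketch of the pattern (the layer construction itself is a placeholder, not the real TL code):

```python
import warnings

def DeConv2d(layer=None, n_filter=None, n_out_channel=None, **kwargs):
    """Sketch: accept the old keyword, warn, and forward to the new one."""
    if n_out_channel is not None:
        warnings.warn(
            "`n_out_channel` is deprecated and will be removed, "
            "use `n_filter` instead",
            DeprecationWarning,
        )
        n_filter = n_out_channel
    # ... real layer construction would happen here ...
    return n_filter
```

Old call sites keep working while their maintainers are told, at the call site, exactly what to change.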
    3. Reuse Variable Scope

You have correctly mentioned a deprecation warning; however, it would be better to mention an appropriate fix rather than just saying "it's deprecated, deal with it now!"

I give you an example:

with tf.variable_scope("my_scope", reuse=reuse) as scope:
    # tl.layers.set_name_reuse(reuse) # deprecated
    if reuse:
        scope.reuse_variables()

This is quite easy to add inside the deprecation warning, and it provides a simple solution to the issue.

    4. No mention in the Changelog of an API change of the ReshapeLayer
## Before
layer = tl.layers.ReshapeLayer(
    layer,
    shape = [-1, 256, 256, 3]
)

## Now
layer = tl.layers.ReshapeLayer(
    layer,
    shape = (-1, 256, 256, 3) # Must use a tuple, a list is not accepted anymore
)
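
Until the layer accepts lists again, old call sites can be kept working with a small helper that coerces the shape argument (`ensure_tuple_shape` is a hypothetical name, not a TL API):

```python
def ensure_tuple_shape(shape):
    """Coerce a list shape to the tuple that ReshapeLayer now requires."""
    if isinstance(shape, tuple):
        return shape
    if isinstance(shape, list):
        return tuple(shape)
    raise TypeError("shape must be a list or tuple, got %r" % type(shape))

# Old-style list shapes pass through as tuples.
shape = ensure_tuple_shape([-1, 256, 256, 3])
```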